Patterns and Operational Insights from the First Users of Delta Lake (Databricks)
Cyber threat detection and response requires demanding workloads over large volumes of log and telemetry data. A few years ago I came to Apple after building such a system at another FAANG company, and my boss asked me to do it again.
Top 10 Mistakes When Migrating From Oracle to PostgreSQL (Jim Mlodgenski)
As more and more people move to PostgreSQL from Oracle, a pattern of mistakes is emerging. They can be caused by the tools being used or by not understanding how PostgreSQL differs from Oracle. In this talk we will discuss the top mistakes people generally make when moving to PostgreSQL from Oracle and what the correct course of action is.
pg_chameleon - MySQL to PostgreSQL replica made easy (Federico Campoli)
pg_chameleon is a lightweight replication system written in Python. The tool can connect to the MySQL replication protocol and replicate the data changes in PostgreSQL.
Whether the user needs to set up a permanent replica between MySQL and PostgreSQL or perform an engine migration, pg_chameleon is the perfect tool for the job.
The talk will cover the history, the current implementation, and the future releases.
The audience will learn how to set up a replica from MySQL to PostgreSQL in a few easy steps. It will also cover lessons learned during the tool's development cycle.
Deep Dive Amazon Redshift for Big Data Analytics - September Webinar Series (Amazon Web Services)
Analyzing big data quickly and efficiently requires a data warehouse optimized to handle and scale for large datasets. Amazon Redshift is a fast, petabyte-scale data warehouse that makes it simple and cost-effective to analyze big data for a fraction of the cost of traditional data warehouses. By following a few best practices, you can take advantage of Amazon Redshift’s columnar technology and parallel processing capabilities to minimize I/O and deliver high throughput and query performance. This webinar will cover techniques to load data efficiently, design optimal schemas, and tune query and database performance.
Learning Objectives:
• Get an inside look at Amazon Redshift's columnar technology and parallel processing capabilities
• Learn how to migrate from existing data warehouses, optimize schemas, and load data efficiently
• Learn best practices for managing workload, tuning your queries, and using Amazon Redshift's interleaved sorting features
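As a rough illustration of why a columnar layout minimizes I/O, here is a small Python sketch (not Redshift internals; the table and column names are made up) comparing how much data a query touches in row-oriented versus column-oriented storage:

```python
# Hypothetical illustration of row vs. columnar storage (not Redshift code).
rows = [{"id": i, "price": i * 1.5, "note": "x" * 100} for i in range(1000)]

# Row-oriented: scanning one column drags every field of every row along.
row_bytes_touched = sum(len(str(r)) for r in rows)

# Column-oriented: the same table stored as per-column arrays.
columns = {
    "id": [r["id"] for r in rows],
    "price": [r["price"] for r in rows],
    "note": [r["note"] for r in rows],
}
# A query like SELECT sum(price) only needs to read the "price" column.
col_bytes_touched = sum(len(str(v)) for v in columns["price"])

print(row_bytes_touched > 10 * col_bytes_touched)  # columnar touches far less data
```

The same pruning idea is what lets a columnar warehouse skip the wide `note` field entirely when a query never references it.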
This one is about advanced indexing in PostgreSQL. It guides you through basic concepts as well as through advanced techniques to speed up the database.
All important PostgreSQL index types explained: B-tree, GIN, GiST, SP-GiST, and hash.
Regular expression indexes and LIKE queries are also covered.
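The reason an ordered index can serve `LIKE 'abc%'` but not `LIKE '%abc'` can be sketched with Python's `bisect` module standing in for a B-tree (an illustration, not PostgreSQL internals): sorted order makes a prefix range cheap, while a suffix match still requires a full scan.

```python
import bisect

# A sorted list of words stands in for a btree index on a text column.
words = sorted(["apple", "apricot", "banana", "band", "bandana", "cherry"])

def like_prefix(prefix):
    """LIKE 'prefix%': binary-search the contiguous range of matching entries."""
    lo = bisect.bisect_left(words, prefix)
    hi = bisect.bisect_right(words, prefix + "\uffff")
    return words[lo:hi]          # O(log n) to locate; touches only matches

def like_suffix(suffix):
    """LIKE '%suffix': sort order doesn't help; every entry must be checked."""
    return [w for w in words if w.endswith(suffix)]   # O(n) scan

print(like_prefix("ban"))   # ['banana', 'band', 'bandana']
print(like_suffix("ana"))   # ['banana', 'bandana']
```

Trigram or expression indexes are the usual PostgreSQL answer when the pattern has no usable prefix.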
The latest version of my PostgreSQL introduction for IL-TechTalks, a free service to introduce the Israeli hi-tech community to new and interesting technologies. In this talk, I describe the history and licensing of PostgreSQL, its built-in capabilities, and some of the new things that were added in the 9.1 and 9.2 releases which make it an attractive option for many applications.
PostgreSQL High-Performance Cheat Sheets contain quick methods to find performance issues.
A summary of the course so that when problems arise, you are able to easily uncover the performance bottlenecks.
Spark + Parquet In Depth: Spark Summit East Talk by Emily Curtin and Robbie S... (Spark Summit)
What if you could get the simplicity, convenience, interoperability, and storage niceties of an old-fashioned CSV with the speed of a NoSQL database and the storage requirements of a gzipped file? Enter Parquet.
At The Weather Company, Parquet files are a quietly awesome and deeply integral part of our Spark-driven analytics workflow. Using Spark + Parquet, we’ve built a blazing fast, storage-efficient, query-efficient data lake and a suite of tools to accompany it.
We will give a technical overview of how Parquet works and how recent improvements from Tungsten enable SparkSQL to take advantage of this design to provide fast queries by overcoming two major bottlenecks of distributed analytics: communication costs (IO bound) and data decoding (CPU bound).
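One reason columnar formats like Parquet compress well and decode cheaply is that values of a single column sit together, which makes simple encodings such as run-length encoding very effective on low-cardinality columns. A hedged Python sketch of the idea (not Parquet's actual encoding machinery):

```python
def rle_encode(values):
    """Run-length encode a column into [(value, run_length), ...] pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1] = (v, runs[-1][1] + 1)
        else:
            runs.append((v, 1))
    return runs

def rle_decode(runs):
    """Expand the runs back into the original column."""
    return [v for v, n in runs for _ in range(n)]

# Low-cardinality columns (country codes, status flags) collapse dramatically.
column = ["US"] * 500 + ["DE"] * 300 + ["FR"] * 200
encoded = rle_encode(column)
print(encoded)   # [('US', 500), ('DE', 300), ('FR', 200)]
assert rle_decode(encoded) == column
```

A row-oriented file interleaves these values with other fields, so the runs never form and this class of encoding cannot be applied.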
How to boost your data management with Dremio? (Vincent Terrasi)
Works with any source. Relational, non-relational, 3rd party apps. Five years ago nobody was using Hadoop or MongoDB, and five years from now there will be new products. You need a solution that is future proof.
Works with any BI tool. In every company multiple tools are in use. Each department has their favorite. We need to work with all of them.
No ETL, data warehouse, cubes. This would need to give you a really good alternative to these options.
Makes data self-service, collaborative. Probably most important of all, we need to change the dynamic between the business and IT. We need to make it so business users can get the data they want, in the shape they want it, without waiting on IT.
Makes Big Data feel small. It needs to make billions of rows feel like a spreadsheet on your desktop.
Open source. It’s 2017, so we think this has to be open source.
Adventures with the ClickHouse ReplacingMergeTree Engine (Altinity Ltd)
Presentation on ReplacingMergeTree by Robert Hodges of Altinity at the 14 December 2022 SF Bay Area ClickHouse Meetup (https://www.meetup.com/san-francisco-bay-area-clickhouse-meetup/events/289605843/)
MySQL users commonly ask: Here's my table, what indexes do I need? Why aren't my indexes helping me? Don't indexes cause overhead? This talk gives you some practical answers, with a step by step method for finding the queries you need to optimize, and choosing the best indexes for them.
Evolution of MongoDB Replicaset and Its Best Practices (Mydbops)
There are several exciting and long-awaited features released in MongoDB 4.0. He will focus on the prime features, the kinds of problems they solve, and the best practices for deploying replica sets.
Modeling Data and Queries for Wide Column NoSQL (ScyllaDB)
Discover how to model data for wide column databases such as ScyllaDB and Apache Cassandra. Contrast the difference from traditional RDBMS data modeling, going from a normalized “schema first” design to a denormalized “query first” design. Plus how to use advanced features like secondary indexes and materialized views to use the same base table to get the answers you need.
MongoDB 2.8 Replication Internals: Fitting it all together (Scott Hernandez)
MongoDB replication internal architecture for 2.8
Abstract:
Replication in MongoDB requires deep integration with almost every part of the codebase, and has important hooks in various systems like storage, indexing, command processing and querying. Most of the replication components have seen a major overhaul recently in order to make further improvements. In this talk we will address what those pieces are, how they interact, and interesting choices made during their design. We get into the interaction of the replication protocols (commands, really), writes and write concern enforcement, consensus (elections / leader / follower / majority) behaviors, and down into the depths of oplog generation and application on replicas. While a large part of the talk will be a technical overview of the big pieces, we will dive into many important areas in order to ensure better understanding. The audience will be able to greatly affect which areas we focus on during the session, so come with ideas and a focus.
As presented at Confoo 2013.
More than some arcane NoSQL tool, Redis is a simple but powerful swiss army knife you can begin using today.
This talk introduces the audience to Redis and focuses on using it to cleanly solve common problems. Along the way, we'll see how Redis can be used as an alternative to several common PHP tools.
Noah Davis & Luke Melia of Weplay share a series of examples of Redis in the real world. In doing so, they cover a survey of Redis' features, approach, history and philosophy. Most examples are drawn from the Weplay team's experience using Redis to power features on Weplay.com, a social site for youth sports.
Kubernetes & AI - Beauty and the Beast!?! @KCD Istanbul 2024 (Tobias Schneck)
As AI technology pushes into IT, I was wondering, as an “infrastructure container Kubernetes guy”, how does this fancy AI technology get managed from an infrastructure operations view? Is it possible to apply our lovely cloud native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and provide you a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply it to our own infrastructure and get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and what could be beneficial for or limiting your AI use cases in an enterprise environment. An interactive demo will give you some insights into approaches I have already got working for real.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for technology and making things work, along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Connector Corner: Automate dynamic content and events by pushing a button (DianaGray10)
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... (UiPathCommunity)
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Smart TV Buyer Insights Survey 2024 by 91mobiles (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Redis and Groovy and Grails - gr8conf 2011
1. Redis & Groovy & Grails
by Ted Naleid
http://naleid.com
Monday, June 20, 2011
2. “Redis is a collection of data structures exposed over the network”
from: http://nosql.mypopescu.com/post/5403851771/what-is-redis
3. key/value store
like memcached on steroids
4. Strings, Integers,
Lists, Hashes,
Sets & Sorted Sets
(& commonly expected operations with each data type)
8. “Memory is the new Disk, Disk is the new Tape” - Jim Gray
9. Relative Latency
CPU Register - 1x
L2 Cache - 10x
Memory - 100x
Disk - 10,000,000x
analogy from “Redis - Memory as the New Disk” by Tim Lossen and
http://en.wikipedia.org/wiki/Orders_of_magnitude_(speed)
10. CPU Register
1 yard
photo: http://www.flickr.com/photos/limonada/904754668/
15. % telnet localhost 6379
Escape character is '^]'.
set foo bar
+OK
get foo
$3
bar
rpush mylist first
:1
rpush mylist second
:2
lrange mylist 0 -1
*2
$5
first
$6
second
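The `+OK`, `:1`, `$3`, and `*2` markers in the telnet session above are the Redis wire protocol (RESP) reply types. A minimal Python sketch of a parser for just the replies shown (illustrative only; real clients also handle error replies and binary-safe framing):

```python
def parse_reply(lines, i=0):
    """Parse one RESP reply starting at lines[i]; return (value, next_index)."""
    line = lines[i]
    kind, rest = line[0], line[1:]
    if kind == '+':                      # simple string, e.g. +OK
        return rest, i + 1
    if kind == ':':                      # integer, e.g. :1
        return int(rest), i + 1
    if kind == '$':                      # bulk string: $<len>, then the payload line
        if int(rest) == -1:              # $-1 signals a nil reply
            return None, i + 1
        return lines[i + 1], i + 2
    if kind == '*':                      # array: *<count>, then <count> nested replies
        value, n = [], i + 1
        for _ in range(int(rest)):
            item, n = parse_reply(lines, n)
            value.append(item)
        return value, n
    raise ValueError("unknown reply type: " + line)

# The replies from the telnet session above:
print(parse_reply(["+OK"])[0])                                # OK
print(parse_reply([":1"])[0])                                 # 1
print(parse_reply(["$3", "bar"])[0])                          # bar
print(parse_reply(["*2", "$5", "first", "$6", "second"])[0])  # ['first', 'second']
```

Because replies nest uniformly, the same recursive step handles both the scalar `get foo` reply and the array reply from `lrange`.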
16. clients for every* language
*well, not every language, but all the popular/semi-popular ones; you can easily write one if your language doesn’t have one
26. other uses...
distributed locks, tag clouds, session tokens,
auto-complete prefixes, API rate limiting, leaderboards,
capped logs, random set items, A/B testing data storage,
unique per user product pricing/sorting
46. Hashes
hkeys (hash keys)
(hash foo contains bar => baz, qux => quxx)
Redis REPL:
> hkeys foo
1) "bar"
2) "qux"
Groovy:
redis.hkeys("foo") // => [bar, qux]
47. Sets
sadd (set add)
Redis REPL:
> sadd m1 jan
(integer) 1
Groovy:
redis.sadd("m1", "jan") // => 1
48. Sets
sadd (set add)
(set m1 already contains jan; adding feb)
Redis REPL:
> sadd m1 feb
(integer) 1
Groovy:
redis.sadd("m1", "feb") // => 1
49. Sets
sismember (membership test)
(set m1 contains jan, feb)
Redis REPL:
> sismember m1 jan
(integer) 1
Groovy:
redis.sismember("m1", "jan") // => true
50. Sets
sismember (membership test)
Redis REPL:
> sismember m1 mar
(integer) 0
Groovy:
redis.sismember("m1", "mar") // => false
51. Sets
smembers (get full set)
Redis REPL:
> smembers m1
1) "feb"
2) "jan"
Groovy:
redis.smembers("m1") // => [feb, jan]
52. Sets
sinter (set intersection)
(m1 contains jan, feb; m2 contains feb, mar)
Redis REPL:
> sinter m1 m2
1) "feb"
Groovy:
redis.sinter("m1", "m2") // => ["feb"]
53. Sets
sdiff (set difference)
Redis REPL:
> sdiff m1 m2
1) "jan"
Groovy:
redis.sdiff("m1", "m2") // => ["jan"]
54. Sets
sunion (set union)
Redis REPL:
> sunion m1 m2
1) "mar"
2) "jan"
3) "feb"
Groovy:
redis.sunion("m1", "m2") // => ["mar", "jan", "feb"]
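The set commands above map directly onto ordinary mathematical set semantics; the same results can be reproduced with Python's built-in sets (an analogy, not a Redis client):

```python
# The two Redis sets from the slides, as plain Python sets.
m1 = {"jan", "feb"}
m2 = {"feb", "mar"}

assert m1 & m2 == {"feb"}                 # sinter m1 m2
assert m1 - m2 == {"jan"}                 # sdiff m1 m2
assert m1 | m2 == {"jan", "feb", "mar"}   # sunion m1 m2
assert "jan" in m1                        # sismember m1 jan -> 1
assert "mar" not in m1                    # sismember m1 mar -> 0
```

The difference in Redis is that these operations run server-side over shared, persistent sets, so many clients can combine them without shipping the members around.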
55. Sorted Sets
zadd (add with score)
Redis REPL:
> zadd z1 1 jan
(integer) 1
Groovy:
redis.zadd("z1", 1, "jan") // => 1
56. Sorted Sets
zscore (score for member)
(z1 contains jan => 1, feb => 2, mar => 3)
Redis REPL:
> zscore z1 feb
"2"
Groovy:
redis.zscore("z1", "feb") // => 2.0
57. Sorted Sets
zrange (sorted subset)
Redis REPL:
> zrange z1 0 1 withscores
1) "jan"
2) "1"
3) "feb"
4) "2"
Groovy:
redis.zrangeWithScores("z1", 0, 1) // => [["jan", 1], ["feb", 2]]
58. Sorted Sets
zrangebyscore (subset having score range)
Redis REPL:
> zrangebyscore z1 2 3 withscores
1) "feb"
2) "2"
3) "mar"
4) "3"
Groovy:
redis.zrangeByScoreWithScores("z1", 2, 3) // => [["feb", 2], ["mar", 3]]
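A sorted set behaves like a map from member to score that can also be ranged by score. A small Python stand-in for the zadd/zscore/zrangebyscore calls above (illustrative only; Redis actually implements this with a skip list plus a hash, which is what keeps range queries fast):

```python
z1 = {}   # member -> score

def zadd(z, score, member):
    """Add member with score; return 1 if newly added, 0 if score updated."""
    added = member not in z
    z[member] = score
    return 1 if added else 0

def zscore(z, member):
    """Score for a member, or None if absent."""
    return z.get(member)

def zrangebyscore(z, lo, hi):
    """Members whose score falls in [lo, hi], ordered by score."""
    return sorted((m for m, s in z.items() if lo <= s <= hi),
                  key=lambda m: z[m])

zadd(z1, 1, "jan"); zadd(z1, 2, "feb"); zadd(z1, 3, "mar")
print(zscore(z1, "feb"))         # 2
print(zrangebyscore(z1, 2, 3))   # ['feb', 'mar']
```

The dict version scans every member on each range query; the point of the sorted-set structure is to avoid exactly that scan.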
62. Producer
pushes work on a list with lpush
@Grab('redis.clients:jedis:2.0.0')
redis = new redis.clients.jedis.Jedis("localhost")
args.each { redis.lpush("welcome-wagon", it) }
63. Consumer
uses blpop (blocking left pop from list)
@Grab('redis.clients:jedis:2.0.0')
redis = new redis.clients.jedis.Jedis("localhost")
println "Joining the welcome-wagon!"
while (true) {
def name = redis.blpop(0, "welcome-wagon")[1]
println "Welcome ${name}!"
}
64. Mass Producer
srandmember to randomly pick female name from set
@Grab('redis.clients:jedis:2.0.0')
redis = new redis.clients.jedis.Jedis("localhost")
if (!redis.exists("female-names")) {
new File("./female-names.txt").eachLine {redis.sadd("female-names",it)}
}
for (i in 1..100000) {
redis.lpush("welcome-wagon", redis.srandmember("female-names"))
if (i % 1000 == 0) println "Adding $i"
}
female-names.txt from: http://antirez.com/post/autocomplete-with-redis.html
69. RedisTagLib
<redis:memoize key="mykey" expire="3600">
<!--
insert expensive-to-generate GSP content here
content will be executed once; subsequent calls
will pull from redis (redis.get("mykey")) till the key expires
-->
</redis:memoize>
70. RedisService
Spring bean wraps pool connection
// overrides propertyMissing and methodMissing to delegate to redis
def redisService
redisService.foo = "bar"
assert "bar" == redisService.foo
redisService.sadd("months", "february")
assert true == redisService.sismember("months", "february")
71. RedisService
template methods manage pooled Redis connection
redisService.withRedis { Jedis redis ->
redis.set("foo", "bar")
}
73. RedisService
String memoization
redisService.memoize("my-key") { Jedis redis ->
// expensive operation we only want to execute once
}
def ONE_HOUR = 3600 // with optional timeout in seconds
redisService.memoize("my-key-with-timeout", ONE_HOUR) { Jedis redis ->
// expensive operation we want to execute every hour
}
74. RedisService
Domain Class memoization (stores IDs hydrates from DB)
def key = "user:$id:friends-books"
redisService.memoizeDomainList(Book, key, ONE_HOUR) { redis ->
// expensive process to calculate all friend’s books
// stores list of Book ids, hydrates them from DB
}
75. Example
Showing Products with Sort/Filter/Pagination Criteria
76. Other Memoization Methods
memoizeHash, memoizeHashField,
memoizeScore (sorted set score)
79. Can be used in conjunction with Hibernate
80. Partial support for GORM
including Dynamic Finders, Criteria, Named Queries and “Transactions”
81. Limitations
It requires explicit index mapping on fields you want to query
package com.example
class Author {
String name
static mapWith = "redis"
static hasMany = [books: Book]
static mapping = {
name index:true
}
}
82. Under The Covers
MONITOR output for new Author(name: "Stephen King").save()
1308027697.922839 "INCR" "com.example.Author.next_id"
1308027697.940021 "HMSET" "com.example.Author:1" "name" "Stephen King" "version" "0"
1308027697.940412 "SADD" "com.example.Author.all" "1"
1308027697.943318 "SADD" "com.example.Author:id:1" "1"
1308027697.943763 "ZADD" "com.example.Author:id:sorted" "1.0" "1"
1308027697.944911 "SADD" "com.example.Author:name:Stephen+King" "1"