Playtika Bets on Big Data Analytics to Deliver Captivating
Social Gaming Experiences and Engagement
Transcript of a sponsored discussion on how Playtika uses data science and an architectural
approach to conquer hurdles around the volume, velocity, and variety of data.
Listen to the podcast. Find it on iTunes. Get the mobile app. Sponsor: Hewlett
Packard Enterprise.
Dana Gardner: Hello, and welcome to the next edition of the HPE Discover Podcast Series.
I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this
ongoing discussion on IT innovation and how it’s making an impact on
people’s lives.
Our next big-data case study discussion explores how social gaming company
Playtika uses big-data analytics to deliver captivating user experiences and
engagement.
We'll learn how feedback from user action streams can be analyzed rapidly and in bulk
to improve the features and attractions of online games, and can help Playtika react well
in an agile market.
To learn more about leveraging big data in the social casino industry, we're pleased to welcome
Jack Gudenkauf. He is Vice President of Big Data at Playtika in Santa Monica, California.
Welcome, Jack.
Jack Gudenkauf: Thank you. It’s great to be here.
Gardner: Tell us a little bit about Playtika. I understand that you're part of Caesars Interactive
Entertainment and that you have a number of online games, but I don't know much more than
that. What are you all about?
Gudenkauf: We have a few free-to-play social casino games. In fact, we're the industry leader.
We have maybe 10 games at this point: World Series of Poker, which you've probably heard
about, Slotomania, House of Fun, and Bingo Blitz, from a number of combined studios.
Worldwide, we're about 1,000 employees. As I say, we're the industry leader in this space at this
moment. And it's a very challenging space, as you might imagine, just within gaming itself. The
amount of data is huge, especially across all of these games. Collecting information about how
the users play the game and what they like about it is really a completely data-driven
experience.
If we release a new feature, we get feedback. Of course, it’s social gaming as
well. If we find out that they don't like the feature, we have to rev the game
pretty quickly. It's not like the old days, where you go away for a year or so, and
come out with something that you hope people like -- Halo, or something like
that. It's more about the users driving the experience and what they enjoy.
So we'll try something with some content or something else and see if they like
this feature or functionality. If the data comes back immediately showing that players have a
new version of the game and, as they do the slot spin, they're clearly not playing, we literally
change the game.
In fact, in the Bingo Blitz game, we will revise the game as often as once a week, if you can
imagine that. So we have to be pretty agile. The data completely drives the user experience as
well. Do they like this, do they not like this, shall we make this game change?
Data-driven environment
It’s a complete data-driven environment. That's what brought me there. I came from Twitter,
where we used very big data, as you might imagine, with Vertica and Hadoop and such, but it
was more about volume there. Here it’s about variety, velocity, and changing game events across
all of our studios.
You can imagine the amount of data that we have to crunch through, do analytics on,
and then get user feedback. The whole intention is to get feedback
sooner so that we can change the game as rapidly as possible, so that
users are happy with the game.
So it’s completely user-driven as far as kind of the experience and what they enjoy,
which is fun and makes it challenging as well.
Gardner: So being a data scientist in this particular organization gives you a pretty important
place at a major table. It's not something to think about at the end of the month when we run
some reports. This is essential and integral to the success of the company?
Gudenkauf: Of course, we do analyze the data for daily, monthly, and general key performance
indicators (KPIs), daily active users or monthly active users, those types of things. But you're
absolutely right. With the game events themselves, we need to process the data as quickly as
possible and do the analysis. So analytics is a huge part of our processing.
We actually have a game economy as well, which is kind of fascinating. If you think of it in
terms of the US economy, you can only have so much money in the economy without having
inflation and deflation. Imagine if I won all the money and nobody else could have money to play
with. It’s kind of game over for us, because they can’t play the game anymore. So we have to
manage that quite well.
Of course, with the user experience and what players enjoy, and with free-to-play in particular,
the demands are pretty high. It's like with apps that you pay for: the 99-cent apps are the ones
that people think the most about.
When somebody is spending a dollar, it's very important to them. You want the experience to be
a great experience for them. So the data-driven aspects of that and doing the analysis and
analytics of it, and feeding that back to the game is extremely important to us. The velocity and
the variety of games and different features that we have and processing that as fast as possible is
quite a challenge.
Gardner: Now, games like poker, slots, or bingo, these are games that have been around for
decades, if not hundreds of years, and they've had a new life online in the past 15 years, which is
the Dark Ages of online gaming. What's new and different about games now, even though the
game is essentially quite familiar to people? What's new and different about a social casino
game?
Social aspect
Gudenkauf: I've thought about that quite a bit. A lot of it has to do with the social aspect.
Now, you can play bingo, not just with your friends at the local club, but you can play with
people around the world.
You can share items and gifts, and if you are running low on money, maybe you can borrow
some from your friends. And you can chat with them. The social aspect just opened up all kinds
of avenues.
In our case, with our games in the studios, because they're familiar, they stand the test of time.
Take something like bingo or slots, as opposed to some new game that people don't really
understand. They may like it. They may only like it for a while. It’s like playing Scrabble or
Monopoly with your family. It's a game that's just very familiar and something you enjoy
playing.
But, with the online and the social aspect of it, I explain it to other people as imagine Carmen
Sandiego meets bingo. You can have experiences where you're playing bingo, you go on this
journey to Egypt, and you're collecting items and exploring Egypt, trying to get to another thing.
We can take it to places that you wouldn't normally take a traditional kind of board game, and in
a more social way.
Gardner: So this really appeals to what's conceived of as entertainment in multiple ways for an
individual. Again, as you established, the analysis and feedback loops are really important.
I understand why doing great data analysis is so important to this particular use case. Tell us a
little bit about how you pull that off. What sort of data architecture do you have? What sort of
requirements do you have? What are the biggest problems you have to overcome to achieve your
goals?
Gudenkauf: If you think about the traditional way of consuming data and getting it into a
reporting system, you have an extract. You're going to bring in data from somewhere, and of
course, in our case it's from mobile devices, the web, from playing on Facebook. You have
information about how much money users spent, and about user behavior. Did they like it?
So you extract that data as usual, and then you transform it. You reshape it and change it around a
little bit to put it in a format to get it into a data warehouse like Vertica.
That's extract, transform, and load (ETL), the traditional model. You load it into Vertica and
then do your analysis there, where you can do SQL, JOINs, and analytics over it.
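To make that load-then-analyze step concrete, here is a minimal sketch of a KPI query, one of the daily-active-users measures mentioned earlier, run against Vertica over JDBC. The connection details and the table and column names (game_events, user_id, event_time) are hypothetical stand-ins, not Playtika's actual schema.

```scala
import java.sql.DriverManager

object VerticaKpiQuery {
  def main(args: Array[String]): Unit = {
    // Hypothetical host, database, and credentials.
    val conn = DriverManager.getConnection(
      "jdbc:vertica://vertica-host:5433/analytics", "dbadmin", "secret")
    try {
      // Daily active users (DAU) over the last week, expressed as
      // ordinary SQL once the data is in the warehouse.
      val rs = conn.createStatement().executeQuery(
        """SELECT event_time::DATE AS day, COUNT(DISTINCT user_id) AS dau
          |FROM game_events
          |WHERE event_time > CURRENT_DATE - 7
          |GROUP BY 1
          |ORDER BY 1 DESC""".stripMargin)
      while (rs.next()) println(s"${rs.getDate(1)}  dau=${rs.getLong(2)}")
    } finally conn.close()
  }
}
```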
A new industry term that I'm coining is what we call Parallelized Streaming Transformation
Loader (PSTL) instead of ETL. This is about ingesting data as fast as possible, processing it, and
making analytics available through the entire data pipeline, instead of just in the data warehouse.
Real-time streaming
Imagine, instead of the extract, we're taking real-time streaming data. We're reading, in our case,
off a Kafka queue. Kafka is very robust and has been used by LinkedIn and Twitter. So it’s pretty
substantial and scalable.
We read the messages in parallel as they're streaming in from all the game studios, certain
amounts of data here and there, depending on how much we do with the particular studio. With
Bingo Blitz, in our case, we consume a lot more user behavior than say some of the other
studios.
But we ingest all the data. We need to get it in via real-time streaming, so we read it in parallel.
That's the parallel part and the streaming part. And instead of an extract, the stream is
fed into us.
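As a rough sketch of what that parallel streaming read can look like, here is a Spark Streaming job consuming a Kafka topic with the direct approach, where each Kafka partition maps to its own Spark partition. The broker addresses, topic name, and group id are hypothetical, and this is one common way to wire it up, not necessarily Playtika's exact code.

```scala
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent

object PstlIngest {
  def main(args: Array[String]): Unit = {
    val ssc = new StreamingContext(
      new SparkConf().setAppName("pstl-ingest"), Seconds(5))

    val kafkaParams = Map[String, Object](
      "bootstrap.servers"  -> "kafka-1:9092,kafka-2:9092", // hypothetical brokers
      "key.deserializer"   -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id"           -> "pstl-ingest")

    // The direct stream gives one Spark partition per Kafka partition,
    // so every executor reads its own slice of the topic in parallel.
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc, PreferConsistent,
      Subscribe[String, String](Seq("game-events"), kafkaParams))

    stream.foreachRDD { rdd =>
      val jsonEvents = rdd.map(_.value()) // raw JSON event strings
      // ... parallel transformations and the parallel load go here ...
    }

    ssc.start()
    ssc.awaitTermination()
  }
}
```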
Then we do parallel transformations in Spark and our Hadoop cluster. Think of it as bringing in
a bunch of JSON event data and putting it into an in-memory table that's distributed in Spark.
Then, we do parallel transformations, meaning we can restructure the data, we can do transforms
from uppercase to lowercase, whatever we need to do. But it's done in parallel across the cluster
as well. Where, traditionally, there was a single monolithic app running, we can now run
independently of the extract and the load.
We have so much data that we need to also do the transformations in parallel. We do that in what
are called Resilient Distributed Datasets (RDDs). It’s kind of a mouthful, but think of it as just a
bunch of slices of data across a bunch of computers and your nodes, and then doing transforms
on that in parallel. Then, something that has been a dream of mine is how to get all that data in
parallel at the same time into Vertica.
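A hedged sketch of that transformation stage, assuming Spark 2.x APIs and made-up field names (userId, eventType, payload): raw JSON strings become a distributed in-memory table, and the reshaping runs partition by partition across the cluster rather than in one monolithic process.

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SparkSession

object PstlTransform {
  // Hypothetical event shape; Playtika's real events differ per studio.
  case class GameEvent(userId: Long, eventType: String, payload: String)

  def transform(spark: SparkSession, rawJson: RDD[String]): RDD[GameEvent] = {
    import spark.implicits._
    // Parse the JSON in parallel into a distributed in-memory table.
    val events = spark.read.json(spark.createDataset(rawJson))

    events.select("userId", "eventType", "payload").as[GameEvent].rdd
      // mapPartitions keeps each transform local to its own slice of the
      // data, so the reshaping (here, just case normalization) runs on
      // every node at once.
      .mapPartitions(_.map(e => e.copy(eventType = e.eventType.toLowerCase)))
  }
}
```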
Vertica does a great job of massively parallel processing (MPP), and all that means is running
the query and pulling data off of different nodes in the cluster. Then, maybe you're grouping by
this and summing that and doing an average.
But, to date, they hadn't had something that I tried to do when I was at Twitter and only managed
to pull off now, which is to load the data in parallel. While the data is in memory in Spark and
distributed datasets, we use the Vertica Hash function that will tell us exactly where the data will
land when we write it to a Vertica node.
We can say, User A, if I were to write this to Vertica, I know that it’s going to go on this machine.
User B will go to the next machine. It just distributes the load, but we, a priori, hash the data into
buckets, so that we know, when we actually write the data, that it goes to this node. Then, Vertica
doesn’t have to move it. Usually you write it to one node and it says, "No, you really belong over
here," and so it asks you to move it and shuffle, like a traditional MapReduce.
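A sketch of that pre-hashing idea, with one big assumption made explicit: the hash used client-side must be the very hash Vertica uses for segmentation, which is what Playtika reimplemented. The placeholder below uses a plain hashCode, which will not match Vertica's real placement; it only illustrates the bucketing mechanics.

```scala
import org.apache.spark.Partitioner

// Routes each row to the Spark partition that corresponds to the Vertica
// node the row will land on, so the database never has to reshuffle it.
class VerticaNodePartitioner(nodeCount: Int) extends Partitioner {
  override def numPartitions: Int = nodeCount

  // Placeholder: a faithful implementation must reproduce Vertica's own
  // segmentation hash (its HASH() function), not Java's hashCode.
  private def verticaHash(key: Any): Long =
    key.hashCode.toLong & 0xffffffffL

  override def getPartition(key: Any): Int =
    (verticaHash(key) % nodeCount).toInt
}

// Hypothetical usage: one Spark partition per Vertica node.
// val byNode = events.keyBy(_.userId).partitionBy(new VerticaNodePartitioner(3))
```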
Working with Vertica
So we created something in conjunction with the Vertica developers, and we announced it. Part
of it is kind of a TCP server aspect that extends the Copy command that exists in Vertica itself.
We literally go from streaming in parallel, reading into in-memory data structures, doing the
transformations, and then writing directly from memory into our Vertica data warehouse.
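A hedged sketch of that final hop, assuming the Vertica JDBC driver's VerticaCopyStream and the same hypothetical game_events table and node addresses as above: each pre-hashed Spark partition connects to its own Vertica node and streams its rows through a COPY statement, memory to node. Playtika's actual mechanism extends the Copy command with a TCP server; this only approximates the idea.

```scala
import java.io.ByteArrayInputStream
import java.sql.DriverManager
import com.vertica.jdbc.{VerticaConnection, VerticaCopyStream}
import org.apache.spark.TaskContext
import org.apache.spark.rdd.RDD

object PstlLoad {
  // byNode is assumed to have one partition per Vertica node (see the
  // partitioner sketch above); values are assumed delimiter-safe.
  def load(byNode: RDD[(Long, String)], nodes: Array[String]): Unit =
    byNode.foreachPartition { rows =>
      // Each partition writes straight to the node its rows hash to,
      // so Vertica does not have to move the data after the write.
      val node = nodes(TaskContext.getPartitionId() % nodes.length)
      val conn = DriverManager
        .getConnection(s"jdbc:vertica://$node:5433/analytics", "dbadmin", "secret")
        .unwrap(classOf[VerticaConnection])
      val copy = new VerticaCopyStream(conn,
        "COPY game_events FROM STDIN DELIMITER ',' DIRECT")
      copy.start()
      // Buffered in memory here for brevity; a real job would stream.
      val batch = rows.map { case (userId, eventType) => s"$userId,$eventType" }
        .mkString("\n")
      copy.addStream(new ByteArrayInputStream(batch.getBytes("UTF-8")))
      copy.execute()
      copy.finish()
      conn.close()
    }
}
```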
That allows us to get the data in as fast as possible, from the streaming read right to the write.
We don't have to hit a disk along the way, and we can do analytics in Vertica sooner. We can also do
analytics in Hadoop clusters for older data and do machine learning on that. We can do all kinds
of things based on historical user behavior.
If we're doing a sale or something like that, we can see how well it's resonating compared to the
past. What we're doing is pushing the envelope to move the analytics as close as we can up to the actual
game itself.
As I said, traditionally, you do the analytics, get the feedback, change the game, release it in a
week, etc. We're going to try to push that all the way up to be as near real time as we can.
Basically, the PSTL pipeline allows us to do that, do analytics, and tighten that loop down so
that we can get the user behavior to the user as fast as possible.
Gardner: It’s intriguing. It sounds as if you're able, with a common architecture, to do multiple
types of analysis readily but without having to reshuffle the deck chairs each time. Is that fair?
Gudenkauf: That's exactly right. That’s the beauty of this model and why I'm putting up more
prescriptive guidance around it. It changes the paradigm of the traditional way of processing
data. Once you have it in as fast as you can, reshaping it while it’s in memory, which of course is
faster, and taking advantage of doing the parallel transformations at the same time, and in the
parallel loading as well, it’s just a way more optimized solution.
We announced some benchmarking. Last year at the HPE Big Data Conference, Facebook stole
the show with 36 terabytes an hour on 270 machines. With our model, you could do it with about
80 machines. So it scales very well. Some people say, "We're not Twitter or Facebook scale, but
the speed at which we want to consume the data and make it available for analytics is extremely
important to us."
The less busy the machines are, the more you can do with them. So does it need to scale like
that? No, we are not processing as much data, but the volume, velocity, and variety is a big deal
for us. We do need to process the volume, and we do have a lot of events. The volume is not
insignificant. We're talking about billions of events, mind you. We're not on the sheer scale of say
Twitter or Facebook, but the solution will work in both scenarios.
Gardner: So, Jack, with this capability of analysis as close to real time as possible, at the
volume and the variety that you're able to accomplish, this is a great opportunity for you to react
in a gaming environment, but you're also pushing the envelope on what analysis and reaction can
happen to almost any human behavior at scale. In this case, it happens to be gaming, but there are
probably other applications for this. Have you thought about that or are there other places you
can take it within an interactive entertainment environment?
All kinds of solutions
Gudenkauf: I can imagine all kinds of solutions for it. In fact, I've had a number of people
come up to me and say, "We're doing this at the Chicago Stock Exchange, and we have a massive
amount of streaming-in data. This is a perfect solution for that."
I've had other people come to talk to me about other aspects and other games as well that are
not in the social casino genre, but they have the same problem. So it's the traditional problem of how to
ingest data, massage it, load it, and then have analytics through that entire process. It’s applicable
really in any scenario. That’s one of the reasons I'm so excited about the PSTL model, because it
just scales extremely well along the way.
Gardner: Let’s relate this back to this particular application, which is highly entertaining games
that react, and maybe even start pushing the envelope into anticipating what people will want in a
game. What’s the next step for making these types of games engaging? I'm even starting to toy
with the concept of artificial intelligence (AI), where people wouldn’t know that it’s a game.
They might not even know the difference between the game and other social participants. Are we
getting anywhere close to that?
Gudenkauf: You're thinking extremely clearly about the spectrum of analytics in general. Before, it
was just general reporting in the feedback loop, but you're absolutely right. As you can see, it’s
enabled through our model of prescriptive analytics. Looking at historical data and doing
machine learning, we can make better determinations of games and game behavior that will drive
the game based on historical knowledge, or on incoming data, which is more predictive analytics.
Then, as you say, maybe even into the future, beyond predictive and prescriptive analytics, we
can almost change as rapidly as possible. We know the user behavior before the user knows the
behavior. That will be a great world, and I'm sure we would be extremely successful if we got to
that end of the spectrum. But just doing the prescriptive analytics alone, so that the user is happy with the
game, and we can get that back to them as quickly as possible, that’s big in and of itself.
Gardner: So maybe a new game some day will be Pass the Turing Test, you against our analysis
capabilities.
Gudenkauf: Yeah, that would be pretty cool. Maybe eventually it will tie into the whole virtual
reality thing, with the game adapting to behavior information immediately. That will be neat.
Gardner: Very exciting world coming our way, right? We're only scratching the surface. I guess
I have run out of questions because my mind is reeling at some of these possibilities.
One last area though. For a platform like HPE Vertica, what would you like to see them do
intrinsic to the product? We recently had the announcement about the next version of Vertica,
but what might be on your list, a wish-list if you will, for what should be in the product to allow
this sort of thing to happen even more readily?
Influencing the product
Gudenkauf: That’s one of the reasons we go to conferences. It’s one of the few conferences
where you can get to the actual developers or professional services and influence the product
itself.
One of the reasons why I like to be on the leading edge or bleeding edge is so that we can affect
product development and what they are working on. I've been fortunate enough to be able to
work with developers and people internal to Vertica for quite a while now. I just love the product
and I want to see it be successful. With their adoption of, and greater openness to, working with
open source like Spark and MapReduce, the whole ecosystem works well together, as opposed to
opposing each other, which I think is what most people think. It’s a very collaborative,
cooperative environment especially through our pipeline.
I really like the fact that when I talk about things like Kafka and the PSTL, and that Spark is a
core part of our architecture, now we're having conversations, and lots of them, to help Vertica
and influence them to invest more in Spark and the interaction between the Vertica data warehouse,
Spark, and that ecosystem from Kafka.
Based on the work that we did with Vertica over the last year, reading streaming data
from Kafka into Spark, of course, and then into Vertica, they said that reading real-time
streaming data from Kafka directly into Vertica would be a great add-on, and they announced it.
Ben Vandiver and the developers announced it.
I really want to be in a place, and this work affords us that place, to influence where they are
going, because it benefits all of us and the entire community. It's being able to give them
prescriptive guidance as well from the customer perspective, because this is what we're doing in
the real world, of course. They want to make us happy, and we will make them happy.
Our investments have been in things like Kafka streaming and Spark, and how Spark SQL
works with Vertica and VSQL. They don’t necessarily have to compete. There is a world for both.
So coexisting, influencing that, and having them be receptive to it is amazing. A lot of companies
aren’t very receptive to taking the feedback from us as consumers and baking that into offerings.
One of the things in our model for loading the data as fast as possible in parallel is that we
pre-hash the data. If you just take user IDs, for instance, and hash on those IDs so that you can
put this user on this node and that user on that node, you get an even distribution of data. That
capability wasn’t exposed in Vertica. I've been asking for it since the Twitter days, for years.
So we wrote our own version of it. I managed to have the Vertica developers, which is a rare and
great opportunity, review what we had done. They said, "Yes, that’s spot on. That’s exactly the
implementation." I said, "You know what would be even better? I've been asking for this for years,
and I know you have lots of other customers. Why don’t you just make it available for everybody
to use? Then I don’t have to use mine, and everybody else can benefit from it as well."
They just announced that they're going to make it available. So being able to influence things like
that just helped the whole ecosystem.
Gardner: Excellent. I'm afraid we'll have to leave it there. We've been exploring how Playtika
uses big-data analytics to deliver captivating social gaming experiences and engagement for their end
users, but we've also seen that they have a tremendous amount of data science going on and an
architectural approach to conquer some of these hurdles around volume, velocity, and variety
that I think probably are applicable in many other cutting-edge applications.
So a big thank-you to our guest. We've been here with Jack Gudenkauf, Vice President of Big Data at
Playtika in Santa Monica, California. Thanks so much, Jack.
Gudenkauf: Thank you. It was a pleasure.
Gardner: And a big thank you to our audience as well for joining us for this big data innovation
case study discussion.
I'm Dana Gardner; Principal Analyst at Interarbor Solutions, your host for this ongoing series of
HPE-sponsored discussions. Thanks again for listening, and come back next time.
Listen to the podcast. Find it on iTunes. Get the mobile app. Sponsor: Hewlett
Packard Enterprise.
Transcript of a sponsored discussion on how Playtika uses data science and an architectural
approach to conquer hurdles around the volume, velocity, and variety of data.
Copyright Interarbor Solutions, LLC, 2005-2016. All rights reserved.

More Related Content

Viewers also liked

How IT Innovators Turned Digital Disruption into a Business Productivity Mult...
How IT Innovators Turned Digital Disruption into a Business Productivity Mult...How IT Innovators Turned Digital Disruption into a Business Productivity Mult...
How IT Innovators Turned Digital Disruption into a Business Productivity Mult...Dana Gardner
 
How Big Data Deep Analysis and Agile SQL Querying Give 2016 Campaigners an Ed...
How Big Data Deep Analysis and Agile SQL Querying Give 2016 Campaigners an Ed...How Big Data Deep Analysis and Agile SQL Querying Give 2016 Campaigners an Ed...
How Big Data Deep Analysis and Agile SQL Querying Give 2016 Campaigners an Ed...Dana Gardner
 
How Governments Gain Economic Benefits from Inter-Public-Cloud Interoperabili...
How Governments Gain Economic Benefits from Inter-Public-Cloud Interoperabili...How Governments Gain Economic Benefits from Inter-Public-Cloud Interoperabili...
How Governments Gain Economic Benefits from Inter-Public-Cloud Interoperabili...Dana Gardner
 
Infrastructure as Destiny — How Purdue Builds a Support Fabric for Big Data E...
Infrastructure as Destiny — How Purdue Builds a Support Fabric for Big Data E...Infrastructure as Destiny — How Purdue Builds a Support Fabric for Big Data E...
Infrastructure as Destiny — How Purdue Builds a Support Fabric for Big Data E...Dana Gardner
 
How HudsonAlpha Innovates on IT for Research-Driven Education, Genomic Medici...
How HudsonAlpha Innovates on IT for Research-Driven Education, Genomic Medici...How HudsonAlpha Innovates on IT for Research-Driven Education, Genomic Medici...
How HudsonAlpha Innovates on IT for Research-Driven Education, Genomic Medici...Dana Gardner
 
Gaining Digital Business Strategic View Across More Data Gives AmeriPride Cul...
Gaining Digital Business Strategic View Across More Data Gives AmeriPride Cul...Gaining Digital Business Strategic View Across More Data Gives AmeriPride Cul...
Gaining Digital Business Strategic View Across More Data Gives AmeriPride Cul...Dana Gardner
 
Meet George Jetson – Your New Chief Procurement Officer
Meet George Jetson – Your New Chief Procurement OfficerMeet George Jetson – Your New Chief Procurement Officer
Meet George Jetson – Your New Chief Procurement OfficerDana Gardner
 
How Data-Driven Continuous Intelligence Benefits Aid the Development and Mana...
How Data-Driven Continuous Intelligence Benefits Aid the Development and Mana...How Data-Driven Continuous Intelligence Benefits Aid the Development and Mana...
How Data-Driven Continuous Intelligence Benefits Aid the Development and Mana...Dana Gardner
 
Programação carnaval de olinda 2017
Programação carnaval de olinda 2017Programação carnaval de olinda 2017
Programação carnaval de olinda 2017f t
 
Competitive analysis - Nike Air Force 1
Competitive analysis - Nike Air Force 1Competitive analysis - Nike Air Force 1
Competitive analysis - Nike Air Force 1Ian Marcus
 

Viewers also liked (16)

How IT Innovators Turned Digital Disruption into a Business Productivity Mult...
How IT Innovators Turned Digital Disruption into a Business Productivity Mult...How IT Innovators Turned Digital Disruption into a Business Productivity Mult...
How IT Innovators Turned Digital Disruption into a Business Productivity Mult...
 
La buena pregunta y el libro
La buena pregunta y el libroLa buena pregunta y el libro
La buena pregunta y el libro
 
How Big Data Deep Analysis and Agile SQL Querying Give 2016 Campaigners an Ed...
How Big Data Deep Analysis and Agile SQL Querying Give 2016 Campaigners an Ed...How Big Data Deep Analysis and Agile SQL Querying Give 2016 Campaigners an Ed...
How Big Data Deep Analysis and Agile SQL Querying Give 2016 Campaigners an Ed...
 
Practica 1 shirley
Practica 1 shirleyPractica 1 shirley
Practica 1 shirley
 
How Governments Gain Economic Benefits from Inter-Public-Cloud Interoperabili...
How Governments Gain Economic Benefits from Inter-Public-Cloud Interoperabili...How Governments Gain Economic Benefits from Inter-Public-Cloud Interoperabili...
How Governments Gain Economic Benefits from Inter-Public-Cloud Interoperabili...
 
Infrastructure as Destiny — How Purdue Builds a Support Fabric for Big Data E...
Infrastructure as Destiny — How Purdue Builds a Support Fabric for Big Data E...Infrastructure as Destiny — How Purdue Builds a Support Fabric for Big Data E...
Infrastructure as Destiny — How Purdue Builds a Support Fabric for Big Data E...
 
How HudsonAlpha Innovates on IT for Research-Driven Education, Genomic Medici...
How HudsonAlpha Innovates on IT for Research-Driven Education, Genomic Medici...How HudsonAlpha Innovates on IT for Research-Driven Education, Genomic Medici...
How HudsonAlpha Innovates on IT for Research-Driven Education, Genomic Medici...
 
Gaining Digital Business Strategic View Across More Data Gives AmeriPride Cul...
Gaining Digital Business Strategic View Across More Data Gives AmeriPride Cul...Gaining Digital Business Strategic View Across More Data Gives AmeriPride Cul...
Gaining Digital Business Strategic View Across More Data Gives AmeriPride Cul...
 
Meet George Jetson – Your New Chief Procurement Officer
Meet George Jetson – Your New Chief Procurement OfficerMeet George Jetson – Your New Chief Procurement Officer
Meet George Jetson – Your New Chief Procurement Officer
 
How Data-Driven Continuous Intelligence Benefits Aid the Development and Mana...
How Data-Driven Continuous Intelligence Benefits Aid the Development and Mana...How Data-Driven Continuous Intelligence Benefits Aid the Development and Mana...
How Data-Driven Continuous Intelligence Benefits Aid the Development and Mana...
 
17630683
1763068317630683
17630683
 
La buena pregunta y el libro
La buena pregunta y el libroLa buena pregunta y el libro
La buena pregunta y el libro
 
Programação carnaval de olinda 2017
Programação carnaval de olinda 2017Programação carnaval de olinda 2017
Programação carnaval de olinda 2017
 
Tecnología wearables
Tecnología wearablesTecnología wearables
Tecnología wearables
 
Citas y organizadores
Citas y organizadoresCitas y organizadores
Citas y organizadores
 
Competitive analysis - Nike Air Force 1
Competitive analysis - Nike Air Force 1Competitive analysis - Nike Air Force 1
Competitive analysis - Nike Air Force 1
 

Similar to Playtika Bets on Big Data Analytics to Deliver Captivating Social Gaming Experiences and Engagement

Effective Testing of Free-to-Play Games
Effective Testing of Free-to-Play GamesEffective Testing of Free-to-Play Games
Effective Testing of Free-to-Play Gamesemily_greer
 
Massively multiplayer data challenges in mobile game analytics
Massively multiplayer data  challenges in mobile game analyticsMassively multiplayer data  challenges in mobile game analytics
Massively multiplayer data challenges in mobile game analyticsJak Marshall
 
Massively multiplayer data challenges in mobile game analytics
Massively multiplayer data  challenges in mobile game analyticsMassively multiplayer data  challenges in mobile game analytics
Massively multiplayer data challenges in mobile game analyticsJak Marshall
 
SXSW Interactive 2011
SXSW Interactive 2011SXSW Interactive 2011
SXSW Interactive 2011Zach Klein
 
Luke Hohmann's Software Guru 2009 Keynote: Innovation In Software
Luke Hohmann's Software Guru 2009 Keynote: Innovation In SoftwareLuke Hohmann's Software Guru 2009 Keynote: Innovation In Software
Luke Hohmann's Software Guru 2009 Keynote: Innovation In SoftwareEnthiosys Inc
 
5 Things we Can Learn from Games About UX
5 Things we Can Learn from Games About UX5 Things we Can Learn from Games About UX
5 Things we Can Learn from Games About UXDori Adar
 
Game Jam Junkies - Casual Connect SF
Game Jam Junkies - Casual Connect SFGame Jam Junkies - Casual Connect SF
Game Jam Junkies - Casual Connect SFDave Bisceglia
 
Novel Consumer Retail Behavior Analysis From InfoScout Relies on Big Data Cho...
Novel Consumer Retail Behavior Analysis From InfoScout Relies on Big Data Cho...Novel Consumer Retail Behavior Analysis From InfoScout Relies on Big Data Cho...
Novel Consumer Retail Behavior Analysis From InfoScout Relies on Big Data Cho...Dana Gardner
 
5 steps to speed up your game design
5 steps to speed up your game design5 steps to speed up your game design
5 steps to speed up your game designQualitasGlobal
 
100 k users in the first month. How did we do?
100 k users in the first month. How did we do?100 k users in the first month. How did we do?
100 k users in the first month. How did we do?Jorge Galindo Cruces
 
Crowdsourcing Wisdom
Crowdsourcing WisdomCrowdsourcing Wisdom
Crowdsourcing WisdomVantte
 
Usability Testing
Usability TestingUsability Testing
Usability TestingAndy Budd
 
Datatium - using data as a material for contextually responsive design.
Datatium - using data as a material for contextually responsive design.Datatium - using data as a material for contextually responsive design.
Datatium - using data as a material for contextually responsive design.Andrew Fisher
 
Jane McGonigal on the Future of Mobile Gaming
Jane McGonigal on the Future of Mobile GamingJane McGonigal on the Future of Mobile Gaming
Jane McGonigal on the Future of Mobile GamingJane McGonigal
 
Олександр Штаченко "Бенчмаркінг – досліджуємо ринок до початку розробки" Game...
Олександр Штаченко "Бенчмаркінг – досліджуємо ринок до початку розробки" Game...Олександр Штаченко "Бенчмаркінг – досліджуємо ринок до початку розробки" Game...
Олександр Штаченко "Бенчмаркінг – досліджуємо ринок до початку розробки" Game...Lviv Startup Club
 
danmcclary-pspresentation-katieboyle-171030115522.pdf
danmcclary-pspresentation-katieboyle-171030115522.pdfdanmcclary-pspresentation-katieboyle-171030115522.pdf
danmcclary-pspresentation-katieboyle-171030115522.pdfssuser3ee399
 
Why Big and Small Data Is Important by Google's Product Manager
Why Big and Small Data Is Important by Google's Product ManagerWhy Big and Small Data Is Important by Google's Product Manager
Why Big and Small Data Is Important by Google's Product ManagerProduct School
 
Front Porch Keynote 2014
Front Porch Keynote 2014Front Porch Keynote 2014
Front Porch Keynote 2014amboy00
 
Make Your UX Ideas Stick
Make Your UX Ideas StickMake Your UX Ideas Stick
Make Your UX Ideas StickJohn H Douglass
 
Social wizz rapid fire with guest tom chatfield
Social wizz rapid fire with guest tom chatfieldSocial wizz rapid fire with guest tom chatfield
Social wizz rapid fire with guest tom chatfieldPraz Hari
 

Similar to Playtika Bets on Big Data Analytics to Deliver Captivating Social Gaming Experiences and Engagement (20)

Effective Testing of Free-to-Play Games
Effective Testing of Free-to-Play GamesEffective Testing of Free-to-Play Games
Effective Testing of Free-to-Play Games
 
Massively multiplayer data challenges in mobile game analytics
Massively multiplayer data  challenges in mobile game analyticsMassively multiplayer data  challenges in mobile game analytics
Massively multiplayer data challenges in mobile game analytics
 
Massively multiplayer data challenges in mobile game analytics
Massively multiplayer data  challenges in mobile game analyticsMassively multiplayer data  challenges in mobile game analytics
Massively multiplayer data challenges in mobile game analytics
 
SXSW Interactive 2011
SXSW Interactive 2011SXSW Interactive 2011
SXSW Interactive 2011
 
Luke Hohmann's Software Guru 2009 Keynote: Innovation In Software
Luke Hohmann's Software Guru 2009 Keynote: Innovation In SoftwareLuke Hohmann's Software Guru 2009 Keynote: Innovation In Software
Luke Hohmann's Software Guru 2009 Keynote: Innovation In Software
 
5 Things we Can Learn from Games About UX
5 Things we Can Learn from Games About UX5 Things we Can Learn from Games About UX
5 Things we Can Learn from Games About UX
 
Game Jam Junkies - Casual Connect SF
Game Jam Junkies - Casual Connect SFGame Jam Junkies - Casual Connect SF
Game Jam Junkies - Casual Connect SF
 
Novel Consumer Retail Behavior Analysis From InfoScout Relies on Big Data Cho...
Novel Consumer Retail Behavior Analysis From InfoScout Relies on Big Data Cho...Novel Consumer Retail Behavior Analysis From InfoScout Relies on Big Data Cho...
Novel Consumer Retail Behavior Analysis From InfoScout Relies on Big Data Cho...
 
5 steps to speed up your game design
5 steps to speed up your game design5 steps to speed up your game design
5 steps to speed up your game design
 
100 k users in the first month. How did we do?
100 k users in the first month. How did we do?100 k users in the first month. How did we do?
100 k users in the first month. How did we do?
 
Crowdsourcing Wisdom
Crowdsourcing WisdomCrowdsourcing Wisdom
Crowdsourcing Wisdom
 
Usability Testing
Usability TestingUsability Testing
Usability Testing
 
Datatium - using data as a material for contextually responsive design.
Datatium - using data as a material for contextually responsive design.Datatium - using data as a material for contextually responsive design.
Datatium - using data as a material for contextually responsive design.
 
Jane McGonigal on the Future of Mobile Gaming
Jane McGonigal on the Future of Mobile GamingJane McGonigal on the Future of Mobile Gaming
Jane McGonigal on the Future of Mobile Gaming
 
Олександр Штаченко "Бенчмаркінг – досліджуємо ринок до початку розробки" Game...
Олександр Штаченко "Бенчмаркінг – досліджуємо ринок до початку розробки" Game...Олександр Штаченко "Бенчмаркінг – досліджуємо ринок до початку розробки" Game...
Олександр Штаченко "Бенчмаркінг – досліджуємо ринок до початку розробки" Game...
 
danmcclary-pspresentation-katieboyle-171030115522.pdf
danmcclary-pspresentation-katieboyle-171030115522.pdfdanmcclary-pspresentation-katieboyle-171030115522.pdf
danmcclary-pspresentation-katieboyle-171030115522.pdf
 
Why Big and Small Data Is Important by Google's Product Manager
Why Big and Small Data Is Important by Google's Product ManagerWhy Big and Small Data Is Important by Google's Product Manager
Why Big and Small Data Is Important by Google's Product Manager
 
Front Porch Keynote 2014
Front Porch Keynote 2014Front Porch Keynote 2014
Front Porch Keynote 2014
 
Make Your UX Ideas Stick
Make Your UX Ideas StickMake Your UX Ideas Stick
Make Your UX Ideas Stick
 
Social wizz rapid fire with guest tom chatfield
Social wizz rapid fire with guest tom chatfieldSocial wizz rapid fire with guest tom chatfield
Social wizz rapid fire with guest tom chatfield
 

Recently uploaded

Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitecturePixlogix Infotech
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...shyamraj55
 
Pigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions
 
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024BookNet Canada
 
APIForce Zurich 5 April Automation LPDG
APIForce Zurich 5 April  Automation LPDGAPIForce Zurich 5 April  Automation LPDG
APIForce Zurich 5 April Automation LPDGMarianaLemus7
 
Snow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter RoadsSnow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter RoadsHyundai Motor Group
 
New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024BookNet Canada
 
Artificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraArtificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraDeakin University
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Enterprise Knowledge
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsRizwan Syed
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticscarlostorres15106
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Scott Keck-Warren
 
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphSIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphNeo4j
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...Fwdays
 
Key Features Of Token Development (1).pptx
Key  Features Of Token  Development (1).pptxKey  Features Of Token  Development (1).pptx
Key Features Of Token Development (1).pptxLBM Solutions
 
Pigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationSlibray Presentation
 

Recently uploaded (20)

Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC Architecture
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
 
Pigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping Elbows
 
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
 
APIForce Zurich 5 April Automation LPDG
APIForce Zurich 5 April  Automation LPDGAPIForce Zurich 5 April  Automation LPDG
APIForce Zurich 5 April Automation LPDG
 
Snow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter RoadsSnow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter Roads
 
New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
 
Artificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraArtificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning era
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL Certs
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024
 
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphSIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
 
Hot Sexy call girls in Panjabi Bagh 🔝 9953056974 🔝 Delhi escort Service
Hot Sexy call girls in Panjabi Bagh 🔝 9953056974 🔝 Delhi escort ServiceHot Sexy call girls in Panjabi Bagh 🔝 9953056974 🔝 Delhi escort Service
Hot Sexy call girls in Panjabi Bagh 🔝 9953056974 🔝 Delhi escort Service
 
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptxE-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
 
Key Features Of Token Development (1).pptx
Key  Features Of Token  Development (1).pptxKey  Features Of Token  Development (1).pptx
Key Features Of Token Development (1).pptx
 
Pigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food Manufacturing
 
The transition to renewables in India.pdf
The transition to renewables in India.pdfThe transition to renewables in India.pdf
The transition to renewables in India.pdf
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck Presentation
 

Playtika Bets on Big Data Analytics to Deliver Captivating Social Gaming Experiences and Engagement

  • 1. Playtika Bets on Big Data Analytics to Deliver Captivating Social Gaming Experiences and Engagement Transcript of a sponsored discussion on how Playtika is using data science going and an architectural approach to conquer some of these hurdles around volume, velocity, and variety of data. Listen to the podcast. Find it on iTunes. Get the mobile app. Sponsor: Hewlett Packard Enterprise. Dana Gardner: Hello, and welcome to the next edition of the HPE Discover Podcast Series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on IT innovation and how it’s making an impact on people’s lives. Our next big-data case study discussion explores how social gaming company Playtika uses big-data analytics to deliver captivating user experiences and engagement. We'll learn how feedback from user action streams can be analyzed in bulk rapidly to improve the features and attractions of online games and can help Playtika react well in an agile market. To learn more about leveraging big data in the social casino industry, we're pleased to welcome Jack Gudenkauf. He is Vice President of Big Data at Playtika in Santa Monica, California. Welcome, Jack. HPE Vertica Community Edition Start Your Free Trial Now Jack Gudenkauf: Thank you. It’s great to be here. Gardner: Tell us a little bit about Playtika. I understand that you're part of Caesars Interactive Entertainment and that you have a number of online games, but I don't know much more than that. What are you all about? Gudenkauf: We have a few free-to-play social casino games. In fact, we're the industry leader. We have maybe 10 games at this point. World Series of Poker, which you've probably heard about, Slotomania, House of Fun, Bingo Blitz, a number of studios combined. Worldwide, we're about 1,000 employees. As I say, we're the industry leader in this space at this moment. And it's a very challenging space, as you might imagine, just within gaming itself. The Gardner
  • 2. amount of data is huge, especially across all of these games. Collecting information about how the users play the game and what they like about the game, is really a completely data-driven experience. If we release a new feature, we get feedback. Of course, it’s social gaming as well. If we find out that they don't like the feature, we have to rev the game pretty quickly. It's not like the old days, where you go away for a year or so, and come out with something that you hope people like -- Halo, or something like that. It's more about the users driving the experience and what they enjoy. So we'll try something with some content or something else and see if they like this feature or functionality. If the data comes back immediately that, as they do the slot spin and they have a new version of the game and they're clearly not playing, we literally change the game. In fact, in the Bingo Blitz game, we will revise the game as often as once a week, if you can imagine that. So we have to be pretty agile. The data completely drives the user experience as well. Do they like this, do they not like this, shall we make this game change? Data-driven environment It’s a complete data-driven environment. That's what brought me there. I came from Twitter, where we used very big data, as you might imagine, with Vertica and Hadoop and such, but it was more about volume there. Here it’s about variety, velocity, and changing game events across all of our studios. You can imagine the amount of data that we have to crunch through, do analytics on, and then get user feedback. The whole intention is to get feedback sooner so that we can change the game as rapidly as possible, so that users are happy with the game. So it’s completely user-driven as far as kind of the experience and what they enjoy, which is fun and makes it challenging as well. Gardner: So being a data scientist in this particular organization gives you a pretty important place at a major table. It's not something to think about at the end of the month when we run some reports. This is essential and integral to the success of the company? Gudenkauf: Of course, we do analyze the data for daily, monthly, and general key performance indicators (KPIs), daily active users or monthly active users, those types of things. But you're absolutely right. With the game events themselves, we need to process the data as quickly as possible and do the analysis. So analytics is a huge part of our processing. Gudenkauf
  • 3. We actually have a game economy as well, which is kind of fascinating. If you think of it in terms of the US economy, you can only have so much money in the economy without having inflation and deflation. Imagine if I won all the money and nobody else could have money to play with. It’s kind of game over for us, because they can’t play the game anymore. So we have to manage that quite well. Of course, with the user experience and what they enjoy and the free to play, in particular, the demand is pretty high. It’s like with apps that you pay for. The 99-cent apps are the ones that people think the most about. When somebody is spending a dollar, it's very important to them. You want the experience to be a great experience for them. So the data-driven aspects of that and doing the analysis and analytics of it, and feeding that back to the game is extremely important to us. The velocity and the variety of games and different features that we have and processing that as fast as possible is quite a challenge. Gardner: Now, games like poker, slots, or bingo, these are games that have been around for decades, if not hundreds of years, and they've had a new life online in the past 15 years, which is the Dark Ages of online gaming. What's new and different about games now, even though the game is essentially quite familiar to people? What's new and different about a social casino game? Social aspect Gudenkauf: I've thought about that quite a bit. A lot of it has to do with the social aspect. Now, you can play bingo, not just with your friends at the local club, but you can play with people around the world. You can share items and gifts, and if you are running low on money, maybe you can borrow some from your friends. And you can chat with them. The social aspect just opened up all kinds of avenues. In our case, with our games in the studios, because they're familiar, they stand the test of time. Take something like a bingo or slots, as opposed to some new game that people don't really understand. They may like it. They may only like it for a while. It’s like playing Scrabble or Monopoly with your family. It's a game that's just very familiar and something you enjoy playing. But, with the online and the social aspect of it, I explain it to other people as imagine Carmen Sandiego meets bingo. You can have experiences where you're playing bingo, you go on this journey to Egypt, and you're collecting items and exploring Egypt, trying to get to another thing. We can take it to places that you wouldn't normally take a traditional kind of board game and in a more social aspect.
  • 4. Gardner: So this really appeals to what's conceived of as entertainment in multiple ways for an individual. Again, as you established, the analysis and feedback loops are really important. I understand why doing great data analysis is so important to this particular use case. Tell us a little bit about how you pull that off. What sort of data architecture do you have? What sort of requirements do you have? What are the biggest problems you have to overcome to achieve your goals? Gudenkauf: If you think about the traditional way of consuming data and getting it into a reporting system, you have an extract. You're going to bring in data from somewhere, and of course, in our case it’s from mobile devices, the web, from playing on Facebook. You have information about how much money did you spend, and user behavior. Did they like it? So you extract that data as usual, and then you transform it. You reshape it and change it around a little bit to put it in a format to get it into a data warehouse like Vertica. Once you get it into Vertica, you have the extract, transform, and the load (ETL), the traditional model. You load it into Vertica and then you do your analysis there, where you can do SQL, JOINs, and analytics over it. A new industry term that I'm coining is what we call Parallelized Streaming Transformation Loader (PSTL) instead of ETL. This is about ingesting data as fast as possible, processing it, and making analytics available through the entire data pipeline, instead of just in the data warehouse. Real-time streaming Imagine, instead of the extract, we're taking real-time streaming data. We're reading, in our case, off a Kafka queue. Kafka is very robust and has been used by LinkedIn and Twitter. So it’s pretty substantial and scalable. We read the messages in parallel as they're streaming in from all the game studios, certain amounts of data here and there, depending on how much we do with the particular studio. With Bingo Blitz, in our case, we consume a lot more user behavior than say some of the other studios. But we ingest all the data. We need to get it in in real-time streaming. So we read it in in parallel. That’s the parallel part and the streaming part. But then we take it from the streaming, and instead of extracting, it's being fed into us. Then we do parallel transformations in Spark and our Hadoop cluster. Think of it as  bringing in a bunch of JSON event data, we are putting it into an in-memory table that’s distributed in Spark.
  • 5. HPE Vertica Community Edition Start Your Free Trial Now Then, we do parallel transformations, meaning we can restructure the data, we can do transforms from uppercase, lowercase, whatever we need to do. But it's done in parallel across the cluster as well. Where, traditionally, there's a single monolithic app that was running, we could run independent to the extract of the load. We have so much data that we need to also do the transformations in parallel. We do that in what are called Resilient Distributed Datasets (RDDs). It’s kind of a mouthful, but think of it as just a bunch of slices of data across a bunch of computers and your nodes, and then doing transforms on that in parallel. Then, something that has been a dream of mine is how to get all that data in parallel at the same time into Vertica. Vertica does a great job of doing massive parallel processing (MPP) and all that means is running the query and pulling data off of different nodes in the cluster. Then, maybe you're grouping by this and you are summing this and doing an average. But, to date they hadn't had something that I tried to do when I was at Twitter, but managed to pull off now, which is to load the data in parallel. While the data is in memory in Spark and distributed datasets, we use the Vertica Hash function that will tell us exactly where the data will land when we write it to a Vertica node. We can say, User A, if I were to write this to Vertica, I know that it’s going to go on this machine. User B will go to the next machine. It just distributes the load, but we, a priori, hash the data into buckets, so that we know, when we actually write the data, that it goes to this node. Then, Vertica doesn’t have to move it. Usually you write it to one node and it says, "No, you really belong over here," and so it asks you to move it and shuffle, like a traditional MapReduce. Working with Vertica So we created something in conjunction with the Vertica developers. We announced it. That part of it is kind of a TCP server aspect that we extend in the Copy command that exist in Vertica itself. We literally go from streaming in parallel, reading into in-memory data structures, do the transformations, and then write it directly from memory into our Vertica data warehouse. That allows us to get the data in as fast as possible from streaming right to the right. We don’t have to hit a disk along the way and we can do analytics in Vertica sooner. We can also do analytics in Hadoop clusters for older data and do machine learning on that. We can do all kinds of things based on historical user behavior.
If we're doing a sale or something like that, we can see how well it's resonating compared to the past. What we're doing is pushing the envelope to move the analytics as close as we can to the actual game itself. As I said, traditionally you do the analytics, get the feedback, change the game, release it in a week, and so on. We're going to try to push that all the way up to be as near real-time as we can. Basically, the PSTL pipeline allows us to do that, do analytics, and tighten that loop so we can respond to user behavior as fast as possible.

Gardner: It's intriguing. It sounds as if you're able, with a common architecture, to do multiple types of analysis readily, but without having to reshuffle the deck chairs each time. Is that fair?

Gudenkauf: That's exactly right. That's the beauty of this model and why I'm putting out more prescriptive guidance around it. It changes the paradigm of the traditional way of processing data. Getting the data in as fast as you can, reshaping it while it's in memory, which of course is faster, and doing the transformations and the loading in parallel make for a far more optimized solution.

We announced some benchmarking. Last year at the HPE Big Data Conference, Facebook stole the show with 36 terabytes an hour on 270 machines. With our model, you could do it with about 80 machines; that works out to roughly 0.45 terabytes an hour per machine instead of about 0.13, more than three times the per-node throughput. So it scales very well.

Some people say, "We're not at Twitter or Facebook scale, but the speed at which we want to consume the data and make it available for analytics is extremely important to us." The less busy the machines are, the more you can do with them. So does it need to scale like that? No, we're not processing as much data, but volume, velocity, and variety are a big deal for us. We do need to process the volume, and we do have a lot of events; the volume is not insignificant. We're talking about billions of events, mind you. We're not at the sheer scale of, say, Twitter or Facebook, but the solution works in both scenarios.

Gardner: So, Jack, with this capability for near real-time analysis at the volume and variety you're able to handle, this is a great opportunity for you to react in a gaming environment, but you're also pushing the envelope on what analysis and reaction can happen to almost any human behavior at scale. In this case it happens to be gaming, but there are probably other applications for this. Have you thought about that, or are there other places you can take it within an interactive entertainment environment?

All kinds of solutions

Gudenkauf: I can imagine all kinds of solutions for it. In fact, I've had a number of people come up to me and say, "We're working with the Chicago Stock Exchange, and we have a massive amount of data streaming in. This is a perfect solution for that."
I've had other people come talk to me about other aspects, and about other games that are not in the social casino genre but have the same problem. So it's the traditional problem of how to ingest data, massage it, load it, and then have analytics through that entire process. It's applicable in really any scenario. That's one of the reasons I'm so excited about the PSTL model: it scales extremely well along the way.

Gardner: Let's relate this back to this particular application, which is highly entertaining games that react, and maybe even start pushing the envelope into anticipating what people will want in a game. What's the next step for making these types of games engaging?

I'm even starting to toy with the concept of artificial intelligence (AI), where people wouldn't know that it's a game. They might not even know the difference between the game and other social participants. Are we getting anywhere close to that?

Gudenkauf: You're thinking extremely clearly about the spectrum of analytics in general. Before, it was just general reporting in the feedback loop, but you're absolutely right, and it's enabled through our model of prescriptive analytics.

Looking at historical data and doing machine learning, we can make better determinations about games and game behavior, driving the game based on historical knowledge or on incoming data; that's more predictive analytics (a sketch of what such a model might look like follows below). Then, as you say, maybe even into the future, beyond predictive and prescriptive analytics, we can change the game almost as rapidly as behavior changes; we'd know the user's behavior before the user knows it. That would be a great world, and I'm sure we would be extremely successful if we got to that end of the spectrum. But just doing the prescriptive analytics alone, so that the user is happy with the game and we can get changes back to them as quickly as possible, is big in and of itself.

Gardner: So maybe a new game someday will be Pass the Turing Test: you against our analysis capabilities.

Gudenkauf: Yeah, that would be pretty cool. Maybe eventually it will tie into virtual reality, with the game adapting immediately based on behavioral information. That would be neat.

Gardner: A very exciting world coming our way, right? We're only scratching the surface. I guess I've run out of questions, because my mind is reeling at some of these possibilities. One last area, though. For a platform like HPE Vertica, what would you like to see them do intrinsic to the product? We had the announcement recently about the next version of Vertica, but what might be on your list, a wish-list if you will, for what should be in the product to allow this sort of thing to happen even more readily?

Influencing the product

Gudenkauf: That's one of the reasons we go to conferences. It's one of the few conferences where you can get to the actual developers and professional services people and influence the product itself.
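Picking up the machine-learning point above, here is a minimal Scala sketch of the kind of model that could be trained on historical play data with Spark's MLlib. The PlayerHistory fields, the feature choices, and the label are hypothetical stand-ins, not Playtika's actual schema or methodology.

```scala
import org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.rdd.RDD

// Hypothetical per-player summary rolled up from historical events.
case class PlayerHistory(engagedWithFeature: Double, // label: 1.0 or 0.0
                         sessionsPerDay: Double,
                         avgSpend: Double,
                         daysSinceInstall: Double)

// Learn which players are likely to respond to a new game feature.
def trainEngagementModel(history: RDD[PlayerHistory]) = {
  val training = history.map { h =>
    LabeledPoint(h.engagedWithFeature,
      Vectors.dense(h.sessionsPerDay, h.avgSpend, h.daysSinceInstall))
  }.cache()
  new LogisticRegressionWithLBFGS().setNumClasses(2).run(training)
}

// Scoring a live player as events flow through the PSTL pipeline:
//   model.predict(Vectors.dense(4.0, 2.5, 30.0))  // => 0.0 or 1.0
```

A model like this, scored against the live stream, is what moves the feedback loop from reporting toward the predictive end of the spectrum described above.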
One of the reasons I like to be on the leading edge, or bleeding edge, is so that we can affect product development and what they're working on. I've been fortunate enough to work with developers and people internal to Vertica for quite a while now. I just love the product, and I want to see it be successful.

With their adoption of, and growing openness to, open source like Spark and MapReduce, the whole ecosystem works well together, as opposed to in opposition, which I think is what most people expect. It's a very collaborative, cooperative environment, especially through our pipeline. I really like the fact that when I talk about things like Kafka and the PSTL, and about Spark being a core part of our architecture, we're now having conversations, and lots of them, to help Vertica and influence them to invest more in Spark and in the interaction between the Vertica data warehouse, Spark, and the Kafka ecosystem.

Out of the work we did with Vertica over the last year, reading streaming data from Kafka into Spark and then into Vertica, they concluded that reading real-time streaming data from Kafka directly into Vertica would be a great add-on, and they announced it; Ben Vandiver and the developers announced it.

I really want to be in a place, and this affords us that place, to influence where they're going, because it benefits all of us and the entire community. It's also about being able to give them prescriptive guidance from the customer perspective, because this is what we're doing in the real world. They want to make us happy, and we'll make them happy.

Our investments have been in things like Kafka streaming and Spark, and in how Spark SQL works with Vertica and VSQL. They don't necessarily have to compete; there's a world for both (a sketch of that coexistence follows below). Coexisting, influencing that, and having them be receptive to it is amazing. A lot of companies aren't very receptive to taking feedback from us as consumers and baking it into their offerings.

One of the things in our model for loading the data as fast as possible in parallel is that we pre-hash the data. If you take user IDs, for instance, and hash on those IDs so that you can put this user on this node and that user on that node, you get an even distribution of data. That hash function wasn't exposed in Vertica, and I'd been asking for it for years, since the Twitter days. So we wrote our own version of it.

I managed to have the Vertica developers, which is a rare and great opportunity, review what we had done. They said, "Yes, that's spot on. That's exactly the implementation." I said, "You know what would be even better? I've been asking for this for years, and I know you have lots of other customers. Why don't you just make it available for everybody to use? Then I don't have to use mine, and everybody else can benefit from it as well."

They just announced that they're going to make it available. Being able to influence things like that helps the whole ecosystem.
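On the Spark SQL and VSQL coexistence point, here is a small sketch of reading a Vertica table into a Spark DataFrame through Spark's generic JDBC source; Vertica ships a standard JDBC driver, so no special connector is needed for a basic read. The host, database, table, and credentials are placeholders.

```scala
import org.apache.spark.sql.{DataFrame, SQLContext}

def gameEventsFromVertica(sqlContext: SQLContext): DataFrame = {
  // Placeholder connection details; 5433 is Vertica's default port,
  // and com.vertica.jdbc.Driver is its JDBC driver class.
  val df = sqlContext.read
    .format("jdbc")
    .option("url", "jdbc:vertica://vertica-node1:5433/analytics")
    .option("driver", "com.vertica.jdbc.Driver")
    .option("dbtable", "game_events")
    .option("user", "dbadmin")
    .option("password", "changeme")
    .load()

  // The same table stays available to Vertica's own vsql clients,
  // while Spark SQL can now join it against streaming data.
  df.registerTempTable("game_events")
  df
}
```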
Gardner: Excellent. I'm afraid we'll have to leave it there. We've been exploring how Playtika uses big-data analytics to deliver captivating social gaming experiences and engagement for their end users. We've also seen that they have a tremendous amount of data science going on, and an architectural approach to conquering hurdles around volume, velocity, and variety that is probably applicable in many other cutting-edge applications.

So a big thank-you to our guest. We've been here with Jack Gudenkauf, Vice President of Big Data at Playtika in Santa Monica, California. Thanks so much, Jack.

Gudenkauf: Thank you. It was a pleasure.

Gardner: And a big thank-you to our audience as well for joining us for this big-data innovation case study discussion. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-sponsored discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Sponsor: Hewlett Packard Enterprise.

Copyright Interarbor Solutions, LLC, 2005-2016. All rights reserved.

You may also be interested in:

• Redmonk analysts on best navigating the tricky path to DevOps adoption
• DevOps by design--A practical guide to effectively ushering DevOps into any organization
• Need for Fast Analytics in Healthcare Spurs Sogeti Converged Solutions Partnership Model
• HPE's composable infrastructure sets stage for hybrid market brokering role
• Nottingham Trent University Elevates Big Data's role to Improving Student Retention in Higher Education
• Forrester analyst Kurt Bittner on the inevitability of DevOps
• Agile on fire: IT enters the new era of 'continuous' everything
• Big data enables top user experiences and extreme personalization for Intuit TurboTax
• Feedback loops: The confluence of DevOps and big data
• IoT brings on development demands that DevOps manages best, say experts
• Big data generates new insights into what's happening in the world's tropical ecosystems
• DevOps and security, a match made in heaven
• How Sprint employs orchestration and automation to bring IT into DevOps readiness