Viki collects clickstream data from clients and stores it in Hadoop and S3. They retrieve and process the data using PostgreSQL. The data requires extensive cleaning and transforming to standardize fields and properties before analysis. Viki builds query reports and dashboards to visualize the data and provide insights to stakeholders. They manage dependencies between jobs using Azkaban and present findings through daily reports and an interactive data explorer tool.
An experiment in a distributed approach to processing the real-time data generated by a large scale social media campaign. Presented at Cambridge Geek Nights 13.
Goal Based Data Production with Sim Simeonov (Databricks)
Since the invention of SQL and relational databases, data production has been about specifying how data should be transformed through queries. While Apache Spark can certainly be used as a general distributed SQL-like query engine, the power and granularity of Spark’s APIs allow for a fundamentally different, and far more productive, approach. This session will introduce the principles of goal-based data production with examples ranging from ETL, to exploratory data analysis, to feature engineering for machine learning.
Goal-based data production concerns itself with specifying WHAT the desired result is, leaving the details of HOW the result is achieved to the smart data warehouse running on top of Spark. That not only substantially increases productivity, but also significantly expands the audience that can work directly with Spark: from developers and data scientists to technical business users. With specific data and architecture patterns and live demos, this session will demonstrate how easy it is for any company to create its own smart data warehouse with Spark 2.x and gain the benefits of goal-based data production.
CDR-Stats: VoIP Analytics Solution for Asterisk and FreeSWITCH with MongoDB (Areski Belaid)
CDR-Stats is free and open-source call detail record (CDR) analysis and reporting software for FreeSWITCH, Asterisk, and other VoIP switches. It lets you interrogate CDRs to produce reports and statistics via a simple yet powerful web interface.
It is based on the Django Python Framework, Celery, SocketIO, Gevent and MongoDB.
How experiments drive product growth at Viki (ishanagrawal90)
We extensively experiment with all aspects of the product at Viki. These experiments take the form of A/B testing, cohort analysis, and various customisations. We collect and analyse data on how these experiments affect user behaviour and other metrics. We are in a continuous product improvement cycle heavily influenced by these experiments and the corresponding data. We have developed our own custom experimentation framework, called Turing, for this purpose. In this talk I will explain how we carry out these experiments at Viki, and also why and how we built Turing.
Local or Bust! Google Local and all Things Links WCMKE 2014 (Rachel Fredrickson)
Google My Business - It’s free, easy, and literally takes less than 10 minutes to set up your business on Google Search, Google Maps, and Google+. The world of local search is only growing and can help your business get found by searchers faster than it otherwise would be.
This presentation will walk you through the basics of Google My Business, local citations and links and their importance to your local visibility.
Annual workshop for high school and community college librarians in the LA area. Includes demo & discussion of uses of Wikipedia, other 'pedias, YouTube, and other Tubes in information literacy instruction.
Wikipedia, the encyclopedia that anyone can edit, “can never work in theory, only in practice.” Accounting for one in every 200 page views on the Internet, it has become a part of our everyday lives. Wikipedia is changing the way we think about the economics of the web, the potential and the pitfalls of engaging the masses, and the role of professional information architects in a world in which content arrives from literally every direction.
In this session, we’ll explore the nuts-and-bolts of how the Wikipedia project works. Who writes Wikipedia, and why? How does the English Wikipedia maintain quality, consistent tagging, and coherent organization across over two million articles? What happens when contributors disagree? We will take a tour behind the scenes at Wikipedia to learn what happens when users are encouraged to - as they say on Wikipedia… “be bold.”
Communities of Practice: Conversations To Collaboration (Collabor8now Ltd)
What makes a successful Community of Practice?
This presentation looks at the key ingredients, with particular emphasis on the role of the community facilitator for building trust and cooperation, enabling conversations to become active collaboration and co-production.
A Presentation About Community, By The Community (Neil Perkin)
A crowdsourced presentation about how online communities work, with contributions from 30 planners, strategists, digital specialists and some of the most renowned thinkers in social media strategy.
OSA Con 2022 - Building Event Collection SDKs and Data Models - Paul Boocock ... (Altinity Ltd)
OSA Con 2022: Building Event Collection SDKs and Data Models
Paul Boocock - Snowplow
In this talk we'll go through how we have designed and built over 20 different SDKs to collect events from all sorts of applications (from web & mobile to IoT to server-side), allowing users to collect a rich event stream of data. Then we'll dive into, and demonstrate, the cross-warehouse downstream data models which aggregate the event stream into easy-to-consume data products for analytics, AI, composable CDP, recommendation engines, and many other use cases.
Neo4j GraphSummit Copenhagen - The path to success with Graph Database and Gr... (Neo4j)
What’s new and what’s next? Product innovation moves rapidly at Neo4j – learn how graph technology can provide you with the tools to get much more from your data!
Powering Heap With PostgreSQL And CitusDB (PGConf Silicon Valley 2015) - Dan Robinson
At Heap, we lean on PostgreSQL for all our backend heavy lifting. We support an expressive set of queries — conversion funnels with filtering and grouping, retention analysis, and behavioral cohorting to name a few — across billions of users and tens of billions of events. Results need to come back in a matter of seconds and reflect up-to-the-minute data.
This talk will discuss these challenges, with a particular focus on:
- Using CitusDB for interactive analysis across 50 terabytes of data and counting.
- PostgreSQL and Kafka: two great tastes that taste great together.
- UDFs in C and PL/pgSQL, partial indexes for pre-aggregation, and other tricks up our sleeves.
Patterns and Practices for Event Design With Adam Bellemare | Current 2022 (HostedbyConfluent)
Patterns and Practices for Event Design With Adam Bellemare | Current 2022
Events are the fundamental component of every streaming architecture, and how you implement them will hugely impact your event-driven architectures. Despite the wide range of materials on event-driven architectures and the importance of event modeling, this critical domain is often left as an exercise for you to implement on your own. Improperly modeling your events can have difficult and costly impacts on not only your event consumers but on the teams and systems that produce them as well.
In this talk, Adam covers the main considerations of modeling and implementing events. Data is often modeled as a Fact or a Delta, though the distinction isn't always clear.
For one, facts are commonly used in the event-carried state transfer pattern, while deltas are commonly used in event sourcing. But when communicating across domain boundaries, which ones should you choose? What are the tradeoffs, the benefits, and the best use-cases for each? Adam digs into these main event types, providing some examples and guidelines for when to use each.
Adam closes out the presentation with an opinionated list of best practices. Do you think naming is tricky? What about versioning? Evolving your data model got you down? Torn between multiple event types per stream and multiple streams per event? Adam has a host of best practices, well-reasoned examples, and practical tips to help you model and implement your events and streams.
Connecting Your Customers – Building Successful Mobile Games through the Powe... (Amazon Web Services)
Free to play is now the standard for mobile and social games. But succeeding in free-to-play is not easy: You need in-depth data analytics to gain insight into your players so you can monetize your game. Learn how to leverage new features of AWS services such as Elastic MapReduce, Amazon S3, Kinesis, and Redshift to build an end-to-end analytics pipeline. Plus, we’ll show you how to easily integrate analytics with other AWS services in your game.
Most AWS APIs have limits on the amount of data you can send in one request, and sometimes you really need to send a lot of data! To maximise the amount of data you can send while still staying within the limits, some APIs support gzip-compressed payloads. But how can you send a gzipped request when using the Python SDK for AWS (boto3)? Well, I needed to answer this question recently and it turned out not to be as easy as I anticipated… Let’s jump into this rabbit hole together and find out the answer!
HTML5 is all the rage with the cool kids, and although there’s a lot of focus on the new language, there’s plenty for web app developers with new JavaScript APIs both in the HTML5 spec and separated out as their own W3C specifications. This session will take you through demos and code and show off some of the outright crazy bleeding edge demos that are being produced today using the new JavaScript APIs. But it’s not all pie in the sky – plenty is useful today, some even in Internet Explorer!
GDC 2015 - Game Analytics with AWS Redshift, Kinesis, and the Mobile SDKNate Wiger
See the latest analytics architectures for companies succeeding in the free-to-play space, such as Supercell, GREE, and Rovio. Also see how to create a real-time analytics pipeline to connect to your players, enabling you to deliver deeper experiences.
This presentation discusses how WSO2 used complex event processing (CEP) and MapReduce based technologies to track and process data from a soccer match as part of the annual DEBS event processing challenge while achieving throughput in excess of 100,000 events/sec.
Big data streams, Internet of Things, and Complex Event Processing Improve So...Chris Haddad
Teams gain a competitive edge by analyzing Big Data streams. In this session, Chris will describe how complex event processing (CEP) and MapReduce based technologies can improve soccer team performance. Soccer match activity data captured by embedded sensors were streamed and analyzed to understand how player actions impact soccer play.
Creating a REST API is simple these days. Being able to change your API and still keep it working for all consumers is much harder. When using Consumer Contracts, you can let your API evolve and verify that you can still uphold the contract of your consumers. In this presentation I will demonstrate this using Spring Cloud Contract.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis from the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We also held a lovely workshop in which participants tried to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
Connector Corner: Automate dynamic content and events by pushing a button (DianaGray10)
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring of JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Transcript: Selling digital books in 2024: Insights from industry leaders - T... (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
UiPath Test Automation using UiPath Test Suite series, part 3 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
- UI automation introduction
- UI automation sample
- Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Key Trends Shaping the Future of Infrastructure.pdf (Cheryl Hung)
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
This keynote covers the key trends across hardware, cloud, and open source, exploring how these areas are likely to mature and develop over the short and long term, and considering how organisations can position themselves to adapt and thrive.
Search and Society: Reimagining Information Access for Radical Futures (Bhaskar Mitra)
The field of Information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build, inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies need to be explicitly articulated, and we need to develop theories of change in the context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
"Impact of front-end architecture on development cost", Viktor Turskyi (Fwdays)
I have heard many times that architecture is not important for the front-end. Also, many times I have seen how developers implement features on the front-end just following the standard rules for a framework and think that this is enough to successfully launch the project, and then the project fails. How to prevent this and what approach to choose? I have launched dozens of complex projects and during the talk we will analyze which approaches have worked for me and which have not.
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... (Ramesh Iyer)
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. Fostering a culture of innovation takes real work: vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (the CI/CD process) involves many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to market, combined with traditionally slow and manual security checks, has created gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
4. What data do we collect?
• Clickstream data
• An event is some user interaction or product-related action
• A client (web/mobile) sends these events as HTTP calls
• Format: JSON
– Schema-less
– Flexible
{"origin":"tv_show_show", "app_ver":"2.9.3.151",
"uuid":"80833c5a760597bf1c8339819636df04",
"user_id":"5298933u", "vs_id":"1008912v-1380452660-7920",
"app_id":"100004a", "event":"video_play", "timed_comment":"off",
"stream_quality":"variable", "bottom_subtitle":"en",
"device_size":"tablet", "feature":"auto_play",
"video_id":"1008912v", "subtitle_completion_percent":"100",
"device_id":"iPad2,1|6.1.3|apple", "t":"1380452846",
"ip":"99.232.169.246", "country":"ca",
"city_name":"Toronto", "region_name":"ON"}
…
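As a sketch of the client side of this design, here is a minimal way to send such a JSON event as an HTTP call in Python. The collector URL is hypothetical (the real Viki ingest endpoint is not shown in the slides):

```python
import json
import urllib.request

# Hypothetical collector endpoint; the real ingest URL is not public.
COLLECTOR_URL = "https://collector.example.com/events"

def send_event(event: dict) -> int:
    """POST one clickstream event as a JSON HTTP call; return the HTTP status."""
    body = json.dumps(event).encode("utf-8")
    req = urllib.request.Request(
        COLLECTOR_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# A schema-less event: clients may send any subset of fields.
event = {"event": "video_play", "video_id": "1008912v", "country": "ca"}
```

Because the payload is schema-less JSON, adding a new property is just adding a key to the dict; no server-side schema change is needed.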
5. How to keep this data clean?
• Problem: clients often send erroneous data, e.g. a missing parameter
• Solution: we write client libraries for each client to enforce “world peace”
PS: there is no such thing as “world peace”
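A client library of this kind typically validates an event before it is sent. The sketch below is hypothetical (Viki's libraries are not public); the required fields and event names are illustrative choices based on the sample event:

```python
# Illustrative validation a client event library might enforce before sending.
REQUIRED_FIELDS = {"event", "uuid", "app_id", "t"}
KNOWN_EVENTS = {"video_play", "video_pause", "pageview"}  # hypothetical list

def validate_event(event: dict) -> list:
    """Return a list of problems; an empty list means the event is clean."""
    problems = [
        f"missing parameter: {field}"
        for field in sorted(REQUIRED_FIELDS - event.keys())
    ]
    if event.get("event") not in KNOWN_EVENTS:
        problems.append(f"unknown event name: {event.get('event')!r}")
    return problems

bad = {"event": "video_play", "app_id": "100004a"}   # missing uuid and t
```

Centralizing checks like this in per-client libraries is what keeps erroneous events from reaching the pipeline in the first place.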
6. How to collect > 60M events a day?
• fluentd
Scalable
Extensible
Lets you send data to Hadoop, MongoDB,
PostgreSQL, etc.
• Writes to Hadoop (TD), Amazon S3, MongoDB
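fluentd's role here is buffered fan-out: one input stream, several output sinks. A minimal pure-Python sketch of the same pattern (the sink callables stand in for the real Hadoop/S3/MongoDB output plugins; batch size is arbitrary):

```python
class FanOutCollector:
    """Buffer incoming events and flush each batch to every registered sink."""

    def __init__(self, batch_size=100):
        self.batch_size = batch_size
        self.buffer = []
        self.sinks = []  # callables, each standing in for an output plugin

    def add_sink(self, sink):
        self.sinks.append(sink)

    def emit(self, event):
        self.buffer.append(event)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        batch, self.buffer = self.buffer, []
        for sink in self.sinks:
            sink(batch)

hadoop, s3 = [], []
collector = FanOutCollector(batch_size=2)
collector.add_sink(hadoop.extend)
collector.add_sink(s3.extend)
collector.emit({"event": "video_play"})
collector.emit({"event": "pv"})
print(len(hadoop), len(s3))  # 2 2
```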
7. Where do we store it?
• Hadoop (Treasure Data)
It’s fast and easy to set up!
We don’t have the money or time to hire a
Hadoop engineer.
We retrieve data from Hadoop in batch jobs
• Amazon S3
Backup
• MongoDB: Real-time data
9. 2. Retrieving & Processing Data
• Centralizing All Data Sources
• Cleaning Data
• Transforming Data
• Managing Job Dependencies
11. Getting All Data To 1 Place
• Port data from different production databases into PG
• Retrieve click-stream data from Hadoop to PG
a) Production Databases → Analytics DB (PostgreSQL):
thor db:cp --source prod1 --destination analytics -t public.* --force-schema prod1
thor db:cp --source A --destination B -t reporting.video_plays --increment
12. {"origin":"tv_show_show", "app_ver":"2.9.3.151",
"uuid":"80833c5a760597bf1c8339819636df04", "user_id":"5298933u",
"vs_id":"1008912v-1380452660-7920", "app_id":"100004a",
"event":"video_play", "timed_comment":"off", "stream_quality":"variable",
"bottom_subtitle":"en", "device_size":"tablet", "feature":"auto_play",
"video_id":"1008912v", "subtitle_completion_percent":"100",
"device_id":"iPad2,1|6.1.3|apple", "t":"1380452846", "ip":"99.232.169.246",
"country":"ca", "city_name":"Toronto", "region_name":"ON"}
…
date source partner event video_id country cnt
2013-09-29 ios viki video_play 1008912v ca 2
2013-09-29 android viki video_play 1008912v us 18
…
b) Click-stream Data (Hadoop) Analytics DB:
Hadoop
PostgreSQL
Aggregation (Hive)
Export Output / Sqoop
13. SELECT
SUBSTR( FROM_UNIXTIME( time ) ,0 ,10 ) AS `date_d`,
v['source'],
v['partner'],
v['event'],
v['video_id'],
v['country'],
COUNT(1) as cnt
FROM events
WHERE TIME_RANGE(time, '2013-09-29', '2013-09-30')
AND v['event'] = 'video_play'
GROUP BY
SUBSTR( FROM_UNIXTIME( time ) ,0 ,10 ),
v['source'],
v['partner'],
v['event'],
v['video_id'],
v['country'];
Simple Aggregation SQL
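The Hive job above can be mirrored in plain Python to show what it computes: keep only video_play events, group by (date, source, partner, event, video_id, country), and count. A sketch with made-up sample events:

```python
from collections import Counter
from datetime import datetime, timezone

# Illustrative events; timestamps fall on 2013-09-29 UTC.
events = [
    {"time": 1380452846, "source": "ios", "partner": "viki",
     "event": "video_play", "video_id": "1008912v", "country": "ca"},
    {"time": 1380452900, "source": "ios", "partner": "viki",
     "event": "video_play", "video_id": "1008912v", "country": "ca"},
    {"time": 1380452950, "source": "ios", "partner": "viki",
     "event": "pv", "video_id": "1008912v", "country": "ca"},
]

def day(ts):
    # FROM_UNIXTIME(time) truncated to the date, as in the SQL's SUBSTR.
    return datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y-%m-%d")

counts = Counter(
    (day(e["time"]), e["source"], e["partner"],
     e["event"], e["video_id"], e["country"])
    for e in events
    if e["event"] == "video_play"  # WHERE v['event'] = 'video_play'
)
for key, cnt in counts.items():
    print(key, cnt)
```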
14. The Data Is Not Clean!
Event properties and names change as we develop:
Old version: {"user_id": "152u", "country": "sg" }
New version: {"user_id": "152", "country_code": "sg" }
15. SELECT SUBSTR( FROM_UNIXTIME( time ) ,0 ,10 ) AS `date_d`,
v['app_id'] AS `app_id`,
CASE WHEN v['app_ver'] LIKE '%_ax' THEN 'axis'
WHEN v['app_ver'] LIKE '%_kd' THEN 'amazon'
WHEN v['app_ver'] LIKE '%_kf' THEN 'amazon'
WHEN v['app_ver'] LIKE '%_lv' THEN 'lenovo'
WHEN v['app_ver'] LIKE '%_nx' THEN 'nexian'
WHEN v['app_ver'] LIKE '%_sf' THEN 'smartfren'
WHEN v['app_ver'] LIKE '%_vp' THEN 'samsung_viki_premiere'
ELSE LOWER( v['partner'] )
END AS `partner`,
CASE WHEN ( v['app_id'] = '65535a' AND ( v['site'] IN ( 'www.viki.com' ,'viki.com' ,'www.viki.mx' ,'viki.mx' ,'' ) ) ) THEN 'direct'
WHEN ( v['event'] = 'pv' OR v['app_id'] = '100000a' ) THEN 'direct'
WHEN ( v['app_id'] = '65535a' AND v['site'] NOT IN ( 'www.viki.com' ,'viki.com' ,'www.viki.mx' ,'viki.mx' ,'' ) ) THEN 'embed'
WHEN ( v['source'] = 'mobile' AND v['os'] = 'android' ) THEN 'android'
WHEN ( v['source'] = 'mobile' AND v['device_id'] LIKE '%|apple' ) THEN 'ios'
ELSE TRIM( v['source'] )
END AS `source` ,
LOWER( CASE WHEN LENGTH( TRIM( COALESCE ( v['country'] ,v['country_code'] ) ) ) = 2
THEN TRIM( COALESCE ( v['country'] ,v['country_code'] ) )
ELSE NULL END ) AS `country` ,
COALESCE ( v['device_size'] ,v['device'] ) AS `device`,
COUNT( 1 ) AS `cnt`
FROM events
WHERE time >= 1380326400 AND time <= 1380412799
AND v['event'] = 'video_play'
GROUP BY
SUBSTR( FROM_UNIXTIME( time ) ,0 ,10 ), v['app_id'],
CASE WHEN v['app_ver'] LIKE '%_ax'
THEN 'axis' WHEN v['app_ver'] LIKE '%_kd'
THEN 'amazon' WHEN v['app_ver'] LIKE '%_kf'
THEN 'amazon' WHEN v['app_ver'] LIKE '%_lv'
THEN 'lenovo' WHEN v['app_ver'] LIKE '%_nx'
THEN 'nexian' WHEN v['app_ver'] LIKE '%_sf'
THEN 'smartfren' WHEN v['app_ver'] LIKE '%_vp'
THEN 'samsung_viki_premiere'
ELSE LOWER( v['partner'] )
END ,
CASE WHEN ( v['app_id'] = '65535a' AND ( v['site'] IN ( 'www.viki.com' ,'viki.com' ,'www.viki.mx' ,'viki.mx' ,'' ) ) ) THEN 'direct'
WHEN ( v['event'] = 'pv' OR v['app_id'] = '100000a' ) THEN 'direct'
WHEN ( v['app_id'] = '65535a' AND v['site'] NOT IN ( 'www.viki.com' ,'viki.com' ,'www.viki.mx' ,'viki.mx' ,'' ) )
THEN 'embed' WHEN ( v['source'] = 'mobile' AND v['os'] = 'android' ) THEN 'android'
WHEN ( v['source'] = 'mobile' AND v['device_id'] LIKE '%|apple' ) THEN 'ios'
ELSE TRIM( v['source'] ) END,
LOWER( CASE WHEN LENGTH( TRIM( COALESCE ( v['country'] ,v['country_code'] ) ) ) = 2
THEN TRIM( COALESCE ( v['country'] ,v['country_code'] ) )
ELSE NULL END ),
COALESCE ( v['device_size'] ,v['device'] );
(Not so) simple Aggregation SQL
Hadoop
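The CASE ladders in the query above are really just lookup-and-fallback rules. Expressed in Python (the suffix map is copied from the query; anything unmatched falls back to the raw partner field):

```python
# Suffix → partner mapping, taken from the CASE ladder in the query above.
PARTNER_BY_SUFFIX = {
    "_ax": "axis", "_kd": "amazon", "_kf": "amazon", "_lv": "lenovo",
    "_nx": "nexian", "_sf": "smartfren", "_vp": "samsung_viki_premiere",
}

def normalize_partner(app_ver, raw_partner):
    """Map an app_ver suffix to a canonical partner, else fall back."""
    for suffix, partner in PARTNER_BY_SUFFIX.items():
        if app_ver.endswith(suffix):
            return partner
    return raw_partner.lower()

def normalize_country(country, country_code):
    """COALESCE the old and new field names, keeping only 2-letter codes."""
    value = (country or country_code or "").strip()
    return value.lower() if len(value) == 2 else None

print(normalize_partner("2.9.3.151_kf", "Viki"))  # amazon
print(normalize_country(None, "SG"))              # sg
```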
16. UPDATE "reporting"."cl_main_2013_09"
SET source = 'embed', partner = 'partner1'
WHERE app_id = '100105a' AND (source != 'embed' OR partner != 'partner1');
UPDATE "reporting"."cl_main_2013_09"
SET app_id = '100105a'
WHERE (source = 'embed' AND partner = 'partner1') AND (app_id != '100105a');
UPDATE reporting.cl_main_2013_09
SET user_id = user_id || 'u'
WHERE RIGHT(user_id, 1) ~ '[0-9]';
UPDATE "reporting"."cl_main_2013_09"
SET app_id = '100106a'
WHERE (source = 'embed' AND partner = 'partner2') AND (app_id != '100106a');
UPDATE reporting.cl_main_2013_09
SET source = 'raynor', partner = 'viki', app_id = '100000a'
WHERE event = 'pv'
AND source IS NULL
AND partner IS NULL
AND app_id IS NULL;
…even after import
PostgreSQL
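One of the fixes above appends a 'u' suffix to user IDs that still end in a bare digit, mirroring the `RIGHT(user_id, 1) ~ '[0-9]'` predicate. The same rule in Python, for illustration:

```python
def fix_user_id(user_id):
    """Append the 'u' suffix if the ID still ends in a bare digit."""
    if user_id and user_id[-1].isdigit():
        return user_id + "u"
    return user_id

print(fix_user_id("152"))   # 152u
print(fix_user_id("152u"))  # 152u (already clean, left alone)
```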
20. a) Reducing Table Size By Dropping a Dimension
video_plays_with_video_id (20M records):
date source partner event video_id country cnt
2013-09-29 ios viki video_play 1v ca 2
2013-09-29 ios viki video_play 2v ca 18
…
video_plays (4M records):
date source partner event country cnt
2013-09-29 ios viki video_play ca 20
…
PostgreSQL
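Dropping the video_id dimension is a re-aggregation: group by the remaining columns and sum cnt, so the two sample rows collapse into one row with cnt = 20. A sketch:

```python
from collections import defaultdict

video_plays_with_video_id = [
    {"date": "2013-09-29", "source": "ios", "partner": "viki",
     "event": "video_play", "video_id": "1v", "country": "ca", "cnt": 2},
    {"date": "2013-09-29", "source": "ios", "partner": "viki",
     "event": "video_play", "video_id": "2v", "country": "ca", "cnt": 18},
]

def drop_dimension(rows, dim):
    """Re-aggregate rows after removing one dimension, summing cnt."""
    totals = defaultdict(int)
    for row in rows:
        key = tuple(sorted(
            (k, v) for k, v in row.items() if k not in (dim, "cnt")
        ))
        totals[key] += row["cnt"]
    return [{**dict(key), "cnt": cnt} for key, cnt in totals.items()]

video_plays = drop_dimension(video_plays_with_video_id, "video_id")
print(video_plays)  # one row with cnt == 20
```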
21. b) Injecting Extra Fields For Analysis
containers 1 : n videos
containers:
id title
1c Game of Thrones
2c My Girlfriend Is A Gumiho
…
containers (with injected field):
id title video_count
1c Game of Thrones 30
2c My Girlfriend Is A Gumiho 16
…
PostgreSQL
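Injecting video_count is a count over the n-side of the 1:n relationship, denormalized onto each container row. A sketch with the sample titles above (the row shapes are illustrative):

```python
from collections import Counter

containers = [
    {"id": "1c", "title": "Game of Thrones"},
    {"id": "2c", "title": "My Girlfriend Is A Gumiho"},
]
videos = [{"container_id": "1c"}] * 30 + [{"container_id": "2c"}] * 16

def inject_video_count(containers, videos):
    """Denormalize the per-container video count onto each container row."""
    counts = Counter(v["container_id"] for v in videos)
    return [{**c, "video_count": counts.get(c["id"], 0)} for c in containers]

enriched = inject_video_count(containers, videos)
print(enriched[0]["video_count"], enriched[1]["video_count"])  # 30 16
```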
23. Chunk Tables By Month
video_plays (parent table)
video_plays_2013_06
video_plays_2013_07
video_plays_2013_08
video_plays_2013_09
…
ALTER TABLE video_plays_2013_09 INHERIT video_plays;
ALTER TABLE video_plays_2013_09
ADD CHECK ( date >= '2013-09-01'
AND date < '2013-10-01' );
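Routing a row (or a maintenance job) to the right monthly chunk is just name mangling on the date, following the naming convention above. A sketch:

```python
from datetime import date

def chunk_table(parent, d):
    """Return the monthly child table name for a given date."""
    return "%s_%04d_%02d" % (parent, d.year, d.month)

print(chunk_table("video_plays", date(2013, 9, 29)))  # video_plays_2013_09
```

With table inheritance in place, queries go to the parent table and Postgres uses the CHECK constraints to skip children outside the queried date range (constraint exclusion).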
24. Managing Job Dependencies
• Centralizing All Data Sources
• Cleaning Data
• Transforming Data
• Managing Job Dependencies
30. Dashboard
• Yes, a dashboard on Rails.
• We have a daily logship process to port the data
over to the dashboard server.
thor db:logship -t big_table
31. Data Visualization
Tableau is slow when working directly on
PostgreSQL
Export compressed CSVs to the Tableau server
(which runs on Windows)
Line charts do solve most problems
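Exporting compressed CSVs for Tableau needs nothing beyond the standard library. A minimal sketch (file name and columns are illustrative):

```python
import csv
import gzip

rows = [
    {"date": "2013-09-29", "source": "ios", "country": "ca", "cnt": 20},
    {"date": "2013-09-29", "source": "android", "country": "us", "cnt": 18},
]

def export_csv_gz(path, rows):
    """Write rows to a gzip-compressed CSV that Tableau can ingest."""
    with gzip.open(path, "wt", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0]))
        writer.writeheader()
        writer.writerows(rows)

export_csv_gz("video_plays.csv.gz", rows)
```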
32. Engineering involvement in report creation
• Bad idea!
• Enter Query Reports!
Fast report churn rate
“Give me six hours to chop down a tree and I
will spend the first four sharpening the axe”
– Abraham Lincoln
39. Lessons Learnt
• Line charts can solve most problems
• Chart your data quickly
• Our dataset is not that big
40. Simple DIY Suggestion
• Put QueryReports on top of your database, or use Tableau Desktop.
• Use Mixpanel/KISSMetrics for product analytics
• fluentd writes data to Postgres (hstore)
Hey, I am Ishan and this is Huy. We are data engineers at Viki. I want to start by saying that we love the big data community, and would like to thank John for organizing this and giving us an opportunity to share the infrastructure that we have built at Viki over the past year.
We want to break it down in simple steps and walk you through the process that we went through while building it.
It’s a bit like picking trash. You need to know what you want. You don’t want to collect everything, but you also don’t want to leave out anything important.
Add example of an event JSON
Errors in reporting: humans are prone to error.
We collect over 60 million events a day! To put things in perspective, if you put one sheep on a football field for each event that we get, that would be a lot of sheep hanging out on a football field! That is about 700 events a second. Why Hadoop? It allows unstructured data, and we can write Hive queries to easily retrieve it.
We don’t have the money or time to hire a Hadoop engineer. Not even now. Reason: it’s an easy way to store semi-structured data and query it easily using Hive (SQL-like). We use a capped collection in MongoDB for real-time reporting.
Centralizing All Data Sources. Data Cleanliness. Data Transformation. Managing Job Dependencies.
To effectively run queries on our data, we need to bring all the data into the same database. In this case we chose Postgres, since all our databases are already in Postgres. Anyone here know Postgres? It’s like MySQL, but better. We’ve built command-line tools to copy tables from database to database. The following command copies all tables in the public schema of the gaia database to our analytics database, and gives them a separate schema. In PG, a schema is something like a namespace for tables.
Take a look at one sample event being stored in Hadoop in semi-structured JSON form: you have a video play event for that video ID, running on an iPad, coming from an autoplay feature, from Toronto, Ontario, Canada. That’s a hell of a lot of dimensions. We want to aggregate and select a subset of dimensions to port into PG. The Hadoop provider we use (Treasure Data) has a feature that allows you to specify a destination data store (in this case Postgres); it executes the Hadoop job and writes the results into the selected database. It’s the equivalent of using Sqoop to bulk-export data into Postgres.
As we develop, our data changes: we make mistakes, we forget to set a variable somewhere, we change our data structure. So the new data gets mixed up with the old data, and the simple query becomes not so simple.
Cleaning up the data takes a lot of time, both in processing time and actual human work. But it’s absolutely necessary: when writing your SQL query for analysis, you want to focus on your query logic, not on handling inconsistent data values.
Centralizing All Data Sources. Data Cleanliness. Data Transformation. Managing Job Dependencies.
Once all our data is in Postgres, we start to perform transformations and aggregations on it, for various purposes.
For example, to reduce the size of a table so we can serve it on a web UI front-end, we aggregate the data further. In this example, we’re dropping the video_id dimension, grouping the two records together as a new record with the cnt field totalling 20. That reduces the table size.
We also chunk our data tables by month, so that when a new month comes, you don’t touch the old months’ data. This also reduces the index size and makes it easier to archive old data. When we first implemented this, we didn’t know how to query across months, so we had to write complicated queries (like UNIONs); sometimes we even had to load the data into memory and process it there. But then we found this awesome feature in Postgres called table inheritance. It lets you define a parent table with a bunch of children, and you just query the parent table; depending on your query, it figures out the correct child tables to hit.
Centralizing All Data Sources. Data Cleanliness. Data Transformation. Managing Job Dependencies.
Can anyone tell me what this means? OK, no one can. That’s exactly my point. At some point, our daily job workflow grew so complicated that it became hard to manage with crontab.
We can’t do complex visualizations in the dashboard.
We don’t completely exploit the potential of Tableau, but we do have some rather complicated reports running in it.
Query Reports increased our report churn rate quite a lot! We get a lot of requests from management, and Tableau was too complicated and slow for a fast report churn-out time. The Rails app required us to make changes for every report; this is the process an analyst goes through.
Add another slide! (with drop downs)
There are too many reports! I want to see the high-level metrics all in one place.
Enabling the product and business folks to “write” their own queries
A fun side project, where you can see what our viewers are watching. We use a MongoDB capped collection for this.
Collecting data: fluentd, writing to Hadoop and MongoDB. Processing data: PostgreSQL. Presenting data: Query Reports, summary reports, Data Explorer, Tableau.
As they say, you can get away with almost anything on the internet as long as you put a cat picture next to it