Hear my talk on using Redis as a time series database, presented at Nextspace in Culver City, and sponsored by Redis Labs.
Audio: https://soundcloud.com/josiah-carlson-180844738/2016-03-29-redis-talk-josiah-carlson-at-nextspace-in-culver-city
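The talk's central technique, keeping samples in a sorted set scored by timestamp, can be sketched in pure Python. This is an in-memory stand-in rather than code from the talk; against a real server the two operations map to ZADD and ZRANGEBYSCORE, and the metric names below are made up:

```python
import bisect

class TimeSeries:
    """In-memory stand-in for a Redis sorted set used as a time series.
    add() mirrors ZADD (member scored by its timestamp) and range()
    mirrors ZRANGEBYSCORE (fetch members inside a time window)."""

    def __init__(self):
        self._samples = []  # kept sorted by (timestamp, value)

    def add(self, ts, value):
        # ~ ZADD key {f"{ts}:{value}": ts}
        bisect.insort(self._samples, (ts, value))

    def range(self, start, end):
        # ~ ZRANGEBYSCORE key start end
        lo = bisect.bisect_left(self._samples, (start,))
        hi = bisect.bisect_right(self._samples, (end, float("inf")))
        return [v for _, v in self._samples[lo:hi]]

cpu = TimeSeries()
cpu.add(1000, 0.31)
cpu.add(1010, 0.52)
cpu.add(1020, 0.48)
print(cpu.range(1000, 1010))  # → [0.31, 0.52]
```

Encoding the timestamp into the member (as the ZADD comment shows) keeps members unique, which is what makes a sorted set usable as a time series in practice.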
How can you use PostgreSQL as a schemaless (NoSQL) database? Here we cover our use case and highlight upcoming features in Postgres 9.4 and its integration with Django 1.7.
From Trill to Quill: Pushing the Envelope of Functionality and Scale - Badrish Chandramouli
In this talk, I overview Trill, describe two projects that expand Trill's functionality, and describe Quill, a new multi-node offline analytics system I have been working on at MSR.
Lambda architecture on Spark, Kafka for real-time large scale ML - huguk
Sean Owen – Director of Data Science @Cloudera
Building machine learning models is all well and good, but how do they get productionized into a service? It's a long way from a Python script on a laptop, to a fault-tolerant system that learns continuously, serves thousands of queries per second, and scales to terabytes. The confederation of open source technologies we know as Hadoop now offers data scientists the raw materials from which to assemble an answer: the means to build models but also ingest data and serve queries, at scale.
This short talk will introduce Oryx 2, a blueprint for building this type of service on Hadoop technologies. It will survey the problem and the standard technologies and ideas that Oryx 2 combines: Apache Spark, Kafka, HDFS, the lambda architecture, PMML, REST APIs. The talk will touch on a key use case for this architecture -- recommendation engines.
Build a Geospatial App with Redis 3.2 - Andrew Bass, Sean Yesmunt, Sergio Prad... - Redis Labs
We created an app to find nearby running partners and to demonstrate Redis data structures and functions. In this talk, we will review the data structures and walk through our Node.js app, which depends solely on Redis Geospatial Indexes. Commands demoed include GEOADD, ZREM, GEOHASH, GEOPOS, GEODIST, GEORADIUS, and GEORADIUSBYMEMBER.
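To illustrate what GEOADD and GEORADIUS compute, here is a pure-Python stand-in (not the app's code; the runner names and coordinates are made up, and a real deployment would issue these as Redis commands via a client library):

```python
from math import asin, cos, radians, sin, sqrt

def haversine_m(lon1, lat1, lon2, lat2):
    """Great-circle distance in meters (what GEODIST reports)."""
    dlon, dlat = radians(lon2 - lon1), radians(lat2 - lat1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6372797.560856 * asin(sqrt(a))  # mean Earth radius in meters

runners = {}                                    # name -> (lon, lat)

def geoadd(name, lon, lat):                     # ~ GEOADD runners lon lat name
    runners[name] = (lon, lat)

def georadius(lon, lat, radius_m):              # ~ GEORADIUS runners lon lat radius m
    return sorted(n for n, (lo, la) in runners.items()
                  if haversine_m(lon, lat, lo, la) <= radius_m)

geoadd("alice", -118.3965, 34.0211)   # near Culver City
geoadd("bob",   -118.4912, 34.0195)   # near Santa Monica
geoadd("carol", -122.4194, 37.7749)   # San Francisco
print(georadius(-118.40, 34.02, 15_000))  # → ['alice', 'bob']
```

Redis stores members in a sorted set keyed by a geohash-encoded score, so GEORADIUS avoids scanning every member the way this sketch does.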
Get more than a cache back! The Microsoft Azure Redis Cache (NDC Oslo) - Maarten Balliauw
Serving up content on the Internet is something our web sites do daily. But are we doing this in the fastest way possible? How are users in faraway countries experiencing our apps? Why do we have three webservers serving the same content over and over again? In this session, we’ll explore the Azure Content Delivery Network or CDN, a service which makes it easy to serve up blobs, videos and other content from servers close to our users. We’ll explore simple file serving as well as some more advanced, dynamic edge caching scenarios.
A list of all URLs in the deck is at: https://gist.github.com/itamarhaber/87e8c8c7126fbfb3f722
A lightning talk filled to the brim with knowledge about Redis: data structures, performance, RAM, and tips to take Redis to the max.
Redis is an advanced key-value store or a data structure server. This presentation will cover the following topics:
* An overview of Redis
* Data Structures
* Basics of Setup and Installation
* Basics of Administration
* Programming with Redis
* Considerations of Running Redis in a Virtual Machine
* Redis Resources
There will be a number of demonstrations to help explain some of the concepts being presented.
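To give a flavor of the "Data Structures" and "Programming with Redis" items above, here is a tiny pure-Python sketch of three core structures and the commands that drive them. This is an illustrative stand-in, not material from the slides; the real commands are named in the comments:

```python
# A dict stands in for the Redis keyspace.
store = {}

def incr(key):                         # STRING: INCR key
    store[key] = int(store.get(key, 0)) + 1
    return store[key]

def lpush(key, *vals):                 # LIST: LPUSH key v [v ...]
    # LPUSH prepends, so LPUSH key a b leaves the list as [b, a].
    store.setdefault(key, [])[0:0] = reversed(vals)
    return len(store[key])

def sadd(key, *vals):                  # SET: SADD key v [v ...]
    s = store.setdefault(key, set())
    before = len(s)
    s.update(vals)
    return len(s) - before             # number of members actually added

incr("hits"); incr("hits")
lpush("queue", "a", "b")
sadd("tags", "redis", "db", "redis")
print(store["hits"], store["queue"], sorted(store["tags"]))  # → 2 ['b', 'a'] ['db', 'redis']
```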
RespClient - Minimal Redis Client for PowerShell - Yoshifumi Kawai
RespClient is a minimal RESP (REdis Serialization Protocol) client for C# and PowerShell.
https://github.com/neuecc/RespClient
at Japan PowerShell User Group #3
#jpposh
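The wire format a client like RespClient speaks is simple enough to sketch in a few lines. This Python encoder is illustrative, not part of RespClient; it shows how a command becomes a RESP array of bulk strings:

```python
def encode_resp(*args):
    """Encode one command as a RESP array of bulk strings, e.g.
    SET foo bar -> *3\r\n$3\r\nSET\r\n$3\r\nfoo\r\n$3\r\nbar\r\n"""
    out = [b"*%d\r\n" % len(args)]          # array header: element count
    for a in args:
        b = a if isinstance(a, bytes) else str(a).encode()
        out.append(b"$%d\r\n%s\r\n" % (len(b), b))  # bulk string: length, payload
    return b"".join(out)

print(encode_resp("PING"))  # → b'*1\r\n$4\r\nPING\r\n'
```

Because every payload is length-prefixed, the protocol is binary-safe and trivial to parse, which is why minimal clients like this can be written in almost any scripting environment.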
High Performance Redis - Tague Griffith, GoPro - Redis Labs
High Performance Redis looks at a wide range of techniques, from programming to system tuning, to deploy and maintain an extremely high-performing Redis cluster. From the operational perspective, the talk lays out multiple techniques for clustering (sharding) Redis systems and examines how the different approaches impact performance. The talk further examines system settings (Linux network parameters, Redis configuration) and how they impact performance, both good and bad. Finally, for the developer, we look at how different ways of structuring data demonstrate different performance characteristics.
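One common sharding technique of the kind the talk surveys is hashing each key to pick a shard. A minimal sketch (the shard addresses are hypothetical; Redis Cluster applies the same idea with CRC16 modulo 16384 hash slots):

```python
from zlib import crc32

# Hypothetical shard addresses for illustration.
SHARDS = ["redis-a:6379", "redis-b:6379", "redis-c:6379"]

def shard_for(key: str) -> str:
    """Deterministically map a key onto one shard by hashing it."""
    return SHARDS[crc32(key.encode()) % len(SHARDS)]

print(shard_for("user:1001"))
```

Because the mapping is a pure function of the key, every client routes a given key to the same node with no coordination; the trade-off, as the talk's comparison of approaches suggests, is that naive modulo hashing remaps most keys when the shard count changes.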
Using Redis as Distributed Cache for ASP.NET apps - Peter Kellner, 73rd Stre... - Redis Labs
In this session I will build from scratch a Microsoft ASP.NET website that caches WebAPI REST calls, using MSOpenTech's Redis implementation both while developing in Visual Studio and when running on a Windows server under IIS. I will show you how to build a safe, reusable caching library in C# that can be used in any .NET project. I will also demonstrate how to use the Redis cache services available on Microsoft's Azure cloud platform. Further, I'll demonstrate a real-world website that uses Azure Redis Cache and show statistics on how Redis improves performance consistently and reliably.
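The cache-aside pattern such a library implements can be sketched language-neutrally in Python. Here a dict with expiry times stands in for the Redis client (GET on lookup, SETEX-style TTL on store); the function names are illustrative, not from the session:

```python
import time
from functools import wraps

def cached(ttl_seconds, cache=None):
    """Cache-aside decorator: check the cache first; on a miss, call
    through and store the result with an expiry timestamp."""
    cache = {} if cache is None else cache
    def deco(fn):
        @wraps(fn)
        def wrapper(*args):
            hit = cache.get(args)
            if hit and hit[1] > time.time():   # fresh entry: serve from cache
                return hit[0]
            value = fn(*args)                  # the slow REST/API call
            cache[args] = (value, time.time() + ttl_seconds)
            return value
        return wrapper
    return deco

calls = []

@cached(ttl_seconds=60)
def get_user(user_id):
    calls.append(user_id)        # pretend this is an expensive HTTP call
    return {"id": user_id}

get_user(7); get_user(7)
print(len(calls))  # → 1
```

With Redis backing the cache instead of a dict, the TTL is enforced server-side and the cache is shared across all web servers, which is what makes the pattern work for a farm of IIS instances.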
Scalable Streaming Data Pipelines with Redis - Avram Lyon
Slides for talk presented at LA Redis meetup, April 16, 2016 at Scopely.
This is a draft of a session to be presented at Redis Conference 2016.
Description:
Scopely's portfolio of social and mid-core games generates billions of events each day, covering everything from in-game actions to advertising to game engine performance. As this portfolio grew in the past two years, Scopely moved all event analysis from third-party hosted solutions to a new event analytics pipeline on top of Redis and Kinesis, dramatically reducing operating costs and enabling new real-time analysis and more efficient warehousing. Our solution receives events over HTTP and SQS and provides real-time aggregation using a custom Redis-backed application, as well as prompt loads into HDFS for batch analyses.
Recently, we migrated our real-time layer from a pure Redis datastore to a hybrid datastore with recent data in Redis and older data in DynamoDB, retaining performance while further reducing costs. In this session we will describe our experience building, tuning and monitoring this pipeline, and the role of Redis in supporting handling of Kinesis worker failover, deployment, and idempotence, in addition to its more visible role in data aggregation. This session is intended to be helpful for those building streaming data systems and looking for solutions for aggregation and idempotence.
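The idempotence role described above boils down to claim-before-process. A minimal sketch (a Python set standing in for Redis, where the claim would instead be an atomic SET event:&lt;id&gt; 1 NX EX &lt;ttl&gt;; the event ids and handler are made up):

```python
processed = set()   # stands in for Redis keys claimed with SET ... NX

def handle_once(event_id, handler):
    """Process each event at most once, so replays after a Kinesis
    worker failover are harmless. Note the in-process check here is
    NOT atomic across workers; Redis's SET ... NX is, which is why
    it fills this role in a multi-worker pipeline."""
    if event_id in processed:       # claim already taken: skip
        return False
    processed.add(event_id)         # claim succeeded (~ SETNX returned 1)
    handler(event_id)
    return True

seen = []
handle_once("evt-1", seen.append)
handle_once("evt-1", seen.append)   # duplicate delivery, skipped
print(seen)  # → ['evt-1']
```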
As a data scientist I frequently need to create web apps to provide interactive functionality, deliver data APIs or simply publish results. It is now easier than ever to deploy your data driven web app by using cloud based application platforms to do the heavy lifting. Cloud Foundry (http://cloudfoundry.org) is an open source public and private cloud platform that enables simple app deployment, scaling and connectivity to data services like PostgreSQL, MongoDB, Redis and Cassandra.
Resources: http://www.ianhuston.net/2015/01/cloud-foundry-for-data-science-talk/
Redis & MongoDB: Stop Big Data Indigestion Before It Starts - Itamar Haber
Efficiently digesting data in large volumes can prove to be challenging for any database. The challenges are compounded when this influx must be analyzed on the fly, or "tasted", to satisfy the sophisticated palates of modern apps. Luckily, there are several proven remedies you can concoct with Redis to help with potential indigestion.
The URLs from the presentation are also available at: https://gist.github.com/itamarhaber/325e515c1715a12ef132
Back your App with MySQL & Redis, the Cloud Foundry Way - Kenny Bastani, Pivotal - Redis Labs
In this session, we will build a minimum viable Spring Data web service with REST API, add a MySQL backing service as the primary data store, and a Redis Labs backing service for caching. We will demonstrate performance metrics without Redis caching enabled and then with Redis caching enabled. I will also provide an intro-level explanation of the platform capabilities within Pivotal Web Services.
Security Monitoring for big Infrastructures without a Million Dollar budget - Juan Berner
Nowadays, in an increasingly complex and dynamic network, it's not enough to be a regex ninja and store only the logs you think you might need. From network traffic to custom logs, you won't know which logs will be crucial to stop the next attacker, and if you are not planning to spend half of your security budget on a commercial solution, we will show you a way to build your own SIEM with open source. The talk will go from how to build a powerful logging environment for your organization to scaling in the cloud and storing everything forever. We will walk through how to build such a system with open source solutions such as Elasticsearch and Hadoop, and how to create your own custom monitoring rules to monitor everything you need. The talk will also cover how to secure the environment and allow restricted access to other teams, as well as avoiding common pitfalls and ensuring compliance standards.
Eko10 - Security Monitoring for Big Infrastructures without a Million Dollar ... - Hernan Costante
Introduction to InfluxDB, an Open Source Distributed Time Series Database by ... - Hakka Labs
In this presentation, Paul introduces InfluxDB, a distributed time series database that he open sourced based on the backend infrastructure at Errplane. He talks about why you'd want a database specifically for time series and he covers the API and some of the key features of InfluxDB, including:
• Stores metrics (like Graphite) and events (like page views, exceptions, deploys)
• No external dependencies (self contained binary)
• Fast. Handles many thousands of writes per second on a single node
• HTTP API for reading and writing data
• SQL-like query language
• Distributed to scale out to many machines
• Built in aggregate and statistics functions
• Built in downsampling
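The HTTP write API in the early (0.x-era) InfluxDB took a JSON array of series, each with column names and rows of points. A hedged sketch of building such a body (endpoint and payload shape are from the 0.x API and changed in later InfluxDB versions; the series name and values are made up):

```python
import json

def write_body(series, columns, points):
    """Build the JSON body for a 0.x-era InfluxDB write
    (POST /db/<name>/series): one object per series, with parallel
    column names and rows of point values."""
    return json.dumps([{"name": series, "columns": columns, "points": points}])

body = write_body("cpu_load", ["host", "value"], [["server-1", 0.72]])
print(body)
```

The same columns/points layout carries both metrics and events, which is how one schema covers the "metrics like Graphite, events like page views" cases listed above.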
Monitoring Big Data Systems - "The Simple Way" - Demi Ben-Ari
Once you start working with distributed Big Data systems, you start discovering a whole bunch of problems you won't find in monolithic systems.
All of a sudden, monitoring all of the components becomes a big data problem itself.
In the talk we'll cover the aspects you should take into consideration when monitoring a distributed system built with tools like web services, Apache Spark, Cassandra, MongoDB, and Amazon Web Services.
Beyond the tools, what should you monitor about the actual data that flows through the system?
We'll also cover the simplest solution, built with your day-to-day open source tools; the surprising thing is that it comes not from an ops guy.
Demi Ben-Ari is a Co-Founder and CTO @ Panorays.
Demi has over 9 years of experience building various systems, from near real-time applications to Big Data distributed systems.
He describes himself as a software development groupie, interested in tackling cutting-edge technologies.
Demi is also a co-founder of the “Big Things” Big Data community: http://somebigthings.com/big-things-intro/
In recent years, many companies have adopted service-oriented architectures by deploying tens to hundreds of small microservices. But with the increasing number of independent services, do you still know what’s going on in your infrastructure?
Traditional monitoring solutions were mostly focused on machines and fell short at keeping track of infrastructures where service deployments happen multiple times per day and instances get dynamically allocated on a multitude of nodes. Prometheus is a relatively new monitoring system that has gained a lot of popularity in the last two years, as it was explicitly designed for today's needs of service monitoring and container infrastructure.
In this session, you’ll learn how to instrument a service with a Prometheus client library to provide information about its current health and state. In order to get automatically notified when the service becomes unhealthy, you’ll see how to configure alerts and notifications. Along the way, I’ll discuss a few important key metrics paramount to successfully monitor a microservice.
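Instrumenting a service as described boils down to incrementing counters and exposing them in Prometheus's text format for scraping. A minimal pure-Python stand-in (a real client library, such as prometheus_client, adds labels, registries, histograms, and the /metrics HTTP endpoint; the metric names here are illustrative):

```python
class Counter:
    """Minimal stand-in for a Prometheus client counter."""

    def __init__(self, name, help_text):
        self.name, self.help_text, self.value = name, help_text, 0.0

    def inc(self, amount=1.0):
        self.value += amount

    def expose(self):
        # Prometheus text exposition format, as served on /metrics.
        return (f"# HELP {self.name} {self.help_text}\n"
                f"# TYPE {self.name} counter\n"
                f"{self.name} {self.value}\n")

requests_total = Counter("http_requests_total", "Total HTTP requests served.")
requests_total.inc()
requests_total.inc()
print(requests_total.expose())
```

The server pulls this text on a schedule, which is what lets Prometheus discover and track dynamically allocated instances instead of relying on agents pushing from fixed machines.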
Talking about Neo4j after one year of using it in production. This presentation covers db structure (internals), Cypher queries, extensions development, and db tuning & settings.
Why PostgreSQL for Analytics Infrastructure (DW)? - Huy Nguyen
Talk given at Grokking TechTalk 14 - Database Systems
Why start out with Postgres for a reporting/analytics database.
Huy Nguyen
CTO of Holistics.io - BI & Infrastructure SaaS.
This presentation is an attempt to demystify the practice of building reliable data processing pipelines. We go through the pieces needed to build a stable processing platform: data ingestion, processing engines, workflow management, schemas, and pipeline development processes. The presentation also includes component choice considerations and recommendations, as well as best practices and pitfalls to avoid, mostly learnt through expensive mistakes.
Logging and ranting / Vytis Valentinavičius (Lamoda) - Ontico
HighLoad++ 2017
Beijing+Shanghai hall, November 7, 16:00
Abstract:
http://www.highload.ru/2017/abstracts/2842.html
A story about real life experience in Lamoda, featuring logging, forest animals, limited size buffers and morning routines.
Possible takeaways from this presentation:
1. Understanding the need for central log aggregation
2. Learning a few tips about logging and event aggregation
3. Saving a lot of money by implementing your own personal "poor-man's" NewRelic
...
OSMC 2018 | Learnings, patterns and Uber's metrics platform M3, open sourced ... - NETWAYS
At Uber we use high cardinality monitoring to observe and detect issues with our 4,000 microservices running on Mesos and across our infrastructure systems and servers. We’ll cover how we put the resulting 6 billion plus time series to work in a variety of different ways, auto-discovering services and their usage of other systems at Uber, setting up and tearing down alerts automatically for services, sending smart alert notifications that rollup different failures into individual high level contextual alerts, and more. We’ll also talk about how we accomplish all this with a global view of our systems with M3, our open source metrics platform. We’ll take a deep dive look at how we use M3DB, now available as an open source Prometheus long term storage backend, to horizontally scale our metrics platform in a cost efficient manner with a system that’s still sane to operate with petabytes of metrics data.
Speakers: Chris Larsen (Limelight Networks) and Benoit Sigoure (Arista Networks)
The OpenTSDB community continues to grow and with users looking to store massive amounts of time-series data in a scalable manner. In this talk, we will discuss a number of use cases and best practices around naming schemas and HBase configuration. We will also review OpenTSDB 2.0's new features, including the HTTP API, plugins, annotations, millisecond support, and metadata, as well as what's next in the roadmap.
Managing your Black Friday Logs - Antonio Bonuccelli - Codemotion Rome 2018 - Codemotion
Monitoring an entire application is not a simple task, but with the right tools it is not a hard task either. However, events like Black Friday can push your application to the limit, and even cause crashes. As the system is stressed, it generates a lot more logs, which may crash the monitoring system as well. In this talk I will walk through best practices for using the Elastic Stack to centralize and monitor your logs. I will also share some tricks to help you with the huge increase in traffic typical of Black Friday.
Securing your Kubernetes cluster: a step-by-step guide to success! - KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
The new frontiers of AI in RPA with UiPath Autopilot™ - UiPathCommunity
In this free online event, organized by the Italian UiPath Community, you can explore the new features of Autopilot, the tool that brings Artificial Intelligence into the development and use of Automations.
📕 Together we will look at some examples of using Autopilot across several tools in the UiPath Suite:
Autopilot for Studio Web
Autopilot for Studio
Autopilot for Apps
Clipboard AI
GenAI applied to Document Understanding
👨🏫👨💻 Speakers:
Stefano Negro, UiPath MVPx3, RPA Tech Lead @ BSP Consultant
Flavio Martinelli, UiPath MVP 2023, Technical Account Manager @UiPath
Andrei Tasca, RPA Solutions Team Lead @NTT Data
GraphRAG is All You Need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 - Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
PHP Frameworks: I want to break free (IPC Berlin 2024) - Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Welcome to ViralQR, your best QR code generator - ViralQR
Welcome to ViralQR, your best QR code generator available on the market!
At ViralQR, we design static and dynamic QR codes. Our mission is to make business operations easier and customer engagement more powerful through QR technology. Whether you run a small business or a huge enterprise, our easy-to-use platform provides options that can be tailored to your company's branding and marketing strategies.
Our Vision
We are here to make creating QR codes easy and smooth, enhancing customer interaction and making business more fluid. We strongly believe in the ability of QR codes to change how businesses interact with their customers, and we are set on making that technology accessible and usable far and wide.
Our Achievements
Since our inception, we have successfully served many clients, providing QR codes for marketing, service delivery, and feedback collection across various industries. Our platform has been recognized for its ease of use and features that help businesses get the most out of QR codes.
Our Services
At ViralQR, we offer a comprehensive suite of services that caters to your needs:
Static QR codes: Create free static QR codes that can store information such as URLs, vCards, plain text, emails and SMS, Wi-Fi credentials, and Bitcoin addresses.
Dynamic QR codes: These offer all the advanced features and are subscription-based. They can link directly to PDF files, images, micro-landing pages, social accounts, review forms, business pages, and applications. In addition, they can be branded with CTAs, frames, patterns, colors, and logos to enhance your branding.
Pricing and Packages
Additionally, there is a 14-day free offer to ViralQR, which is an exceptional opportunity for new users to take a feel of this platform. One can easily subscribe from there and experience the full dynamic of using QR codes. The subscription plans are not only meant for business; they are priced very flexibly so that literally every business could afford to benefit from our service.
Why choose us?
ViralQR will provide services for marketing, advertising, catering, retail, and the like. The QR codes can be posted on fliers, packaging, merchandise, and banners, as well as to substitute for cash and cards in a restaurant or coffee shop. With QR codes integrated into your business, improve customer engagement and streamline operations.
Comprehensive Analytics
Subscribers of ViralQR receive detailed analytics and tracking tools in light of having a view of the core values of QR code performance. Our analytics dashboard shows aggregate views and unique views, as well as detailed information about each impression, including time, device, browser, and estimated location by city and country.
So, thank you for choosing ViralQR; we have an offer of nothing but the best in terms of QR code services to meet business diversity!
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Assuring Contact Center Experiences for Your Customers With ThousandEyes
March 29, 2016 Dr. Josiah Carlson talks about using Redis as a Time Series DB
1. Redis as a Time Series DB
Josiah Carlson - @dr_josiah
2. Agenda
● Who are you?
● What is Redis? (3 minutes, optional)
● What is a time series database?
● Combining structures for success
● Analyzing/segmenting events
● Next steps
● Questions
3. Who are you?
● A guy who does a lot of stuff with Redis and Python
○ Book:
■ Redis in Action http://manning.com/carlson
■ Read for free http://bit.ly/ria-free
○ Libraries: https://github.com/josiahcarlson
■ rom, RPQueue, lua_call, parse-crontab, …
○ Mailing list: https://groups.google.com/forum/#!forum/redis-db (not active here recently)
○ Blog: dr-josiah.com
○ Twitter: @dr_josiah
4. What is Redis?
● In-memory key/data structure server
○ Strings, lists, hashes, sets, sorted sets (plus integers and floats)
○ HyperLogLog, lexicographic prefix scans, geo index/search api (unreleased 3.2)
○ Pubsub, Lua scripting (like a stored procedure)
● Ops features:
○ Master/slave replication + sentinel for HA
○ Alternative “cluster” mode with sharding/slaving/failover
○ On-disk persistence; point-in-time or incremental
5. What is a time series database?
● Not about the database, but about the data and operations
○ Discrete events or samples of at least one value or metric over time
○ Sometimes the *event* is the value/metric
○ If it has a timestamp (or one is implied), it is part of a time series
● Examples of data:
○ Data gathered from sensors embedded inside IoT devices
■ Nest thermostat - measures temperature, analyzes your preferences…
○ Sell prices and volumes of traded stocks
○ Values and delivery locations of orders placed at an online retailer
○ Actions of users in a video game and their outcome
6. Combining structures for success
● Consider a log of user actions on a web site like:
○ {'ts': 1458710360.679, 'user': 218946, 'type': 'login', 'ip': '172.32.5.6'}
● We can get an easily sliced time series with individual events stored in hashes,
and a sorted set for an index of timestamps:
○ Increment the actions: key, which becomes the event id
○ Store each event individually in its own HASH at key: actions:<event_id>
○ Add an (<event_id>: <ts>) member/score pair to a ZSET at key: actions:ts
● To use:
○ Scan over actions:ts using ZRANGEBYSCORE or ZRANGE
○ Use HGET/HMGET/HGETALL to fetch the actions for analysis
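The HASH + ZSET pattern above can be sketched in pure Python as an in-memory stand-in (no Redis server involved; `record_event` and `events_between` are illustrative names, not part of any library): a dict plays the per-event HASH, and a sorted list of (score, member) pairs plays the actions:ts ZSET.

```python
import bisect
import itertools

event_ids = itertools.count(1)   # stands in for INCR on the actions: counter
hashes = {}                      # actions:<event_id> -> event dict (the HASHes)
ts_index = []                    # actions:ts as sorted (ts, event_id) pairs (the ZSET)

def record_event(event):
    # INCR actions:, HSET actions:<id> ..., ZADD actions:ts <ts> <id>
    eid = next(event_ids)
    hashes['actions:%d' % eid] = dict(event, id=eid)
    bisect.insort(ts_index, (event['ts'], eid))
    return eid

def events_between(start, end):
    # ZRANGEBYSCORE actions:ts start end, then HGETALL for each id
    lo = bisect.bisect_left(ts_index, (start,))
    hi = bisect.bisect_right(ts_index, (end, float('inf')))
    return [hashes['actions:%d' % eid] for _, eid in ts_index[lo:hi]]

record_event({'ts': 1458710360.679, 'user': 218946, 'type': 'login'})
record_event({'ts': 1458710400.0, 'user': 218947, 'type': 'logout'})
```

Against a real server the same flow is the INCR/HSET/ZADD and ZRANGEBYSCORE/HGETALL sequences the following slides show in Lua.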
7. Combining structures for success (reading)
-- Many caveats about using this; you probably don't want it (not cluster compatible)!
-- KEYS should be something like:
--   {'actions:ts', 'actions:'}
-- ARGV should be something like (gets all events from 2016-03-23 UTC):
--   {'1458691200', '1458777600'}
local ids = redis.call('ZRANGEBYSCORE', KEYS[1], unpack(ARGV))
local results = {}
for i, id in ipairs(ids) do
    local item = redis.call('HGETALL', KEYS[2] .. id)
    -- HGETALL returns an empty table for a missing key, so check the length
    if #item > 0 then
        table.insert(results, item)
    end
end
return cjson.encode(results)
8. Combining structures for success (writing)
-- Many caveats about using this; you probably don't want it (not cluster compatible)!
-- KEYS should be something like:
--   {'actions:'}
-- ARGV should be something like:
--   {'<json-encoded data, with ts>'}
local new_id = redis.call('INCR', KEYS[1])
local dest = KEYS[1] .. new_id
local data = cjson.decode(ARGV[1])
data.id = new_id
for k, v in pairs(data) do
    redis.call('HSET', dest, k, v)
end
-- note: ZADD <key> <score> <member>
redis.call('ZADD', KEYS[1] .. 'ts', data.ts, new_id)
return new_id
9. Analyzing/segmenting events
We now have events, but want to (for example) count the number of events of
different types in a time range, equivalent to:
SELECT type, COUNT(*)
FROM actions
WHERE …
GROUP BY type
● We can use Lua to scan over the items in the time range, counting them
● We can create additional ZSETs, one for each event type…
○ actions:login:ts, actions:logout:ts, …
○ Stores (<event_id>: <ts>) pairs, like actions:ts
○ Can use ZCOUNT to get a count without scanning*
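The per-type ZSET idea can be sketched with an in-memory stand-in (hypothetical helper names; against a real server these are just ZADD and ZCOUNT on keys like actions:login:ts): one sorted list of timestamps per event type, so a range count needs no scan.

```python
import bisect
from collections import defaultdict

type_index = defaultdict(list)   # 'actions:<type>:ts' -> sorted list of ts scores

def index_event(event_id, event_type, ts):
    # ZADD actions:<type>:ts <ts> <event_id> (the stand-in keeps only scores)
    bisect.insort(type_index['actions:%s:ts' % event_type], ts)

def count_type(event_type, start, end):
    # ZCOUNT actions:<type>:ts start end -- no scan over the events required
    scores = type_index['actions:%s:ts' % event_type]
    return bisect.bisect_right(scores, end) - bisect.bisect_left(scores, start)

index_event(1, 'login', 100.0)
index_event(2, 'login', 150.0)
index_event(3, 'logout', 160.0)
print(count_type('login', 0, 200))   # → 2
```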
10. Analyzing/segmenting events (continued)
● Or if you needed to index events by user id
○ Add (<event_id>: <user_id>) member/score pairs to actions:user
○ Easy to get total count/user with ZCOUNT (ZCOUNT actions:user <user_id> <user_id>)
○ Can partially induce a secondary timestamp index over the same data, to fetch a specific time range for one user
■ Use ZRANGEBYSCORE and ZREVRANGEBYSCORE on actions:ts to find the start/end of the id range for a pair of timestamps (assuming synchronized clocks, or using Redis 3.2's unreleased Lua side-effects propagation)
■ Either:
● Use ZRANGEBYSCORE on actions:user to scan ids and filter
● Use ZRANGEBYSCORE + ZREVRANGEBYSCORE + ZRANGE to bisect (speed optimization, requires left padding of members with zeros)
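The actions:user index can be sketched the same way (in-memory stand-in, illustrative names): member = event id, score = user id, so ZCOUNT with min and max both set to the user id counts one user's events.

```python
import bisect

user_index = []   # actions:user as sorted (user_id, event_id) pairs, like a ZSET

def add_user_event(event_id, user_id):
    # ZADD actions:user <user_id> <event_id>
    bisect.insort(user_index, (user_id, event_id))

def count_for_user(user_id):
    # ZCOUNT actions:user <user_id> <user_id>
    lo = bisect.bisect_left(user_index, (user_id,))
    hi = bisect.bisect_right(user_index, (user_id, float('inf')))
    return hi - lo

add_user_event(1, 218946)
add_user_event(2, 218946)
add_user_event(3, 99999)
print(count_for_user(218946))   # → 2
```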
11. Combining structures for success (revisited)
● If you know you never need to extract individual fields, and you know that your events are unique…
○ Use a single ZSET with (<encoded json event>: <ts>) pairs
● If you are planning on bucketing based on time slice and only need a list of items
○ Use LISTs keyed as actions:<time slice>
○ Just RPUSH <encoded json event> onto these LISTs
● Approximate counts? HyperLogLog
● If you plan your metrics, you can explicitly count them in real time
○ INCR, DECR, INCRBY, INCRBYFLOAT, HINCRBY, HINCRBYFLOAT, ZINCRBY, all very fast
○ Lua scripting can be a huge win here
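The time-slice LISTs and explicit real-time counters above can be sketched together in memory (stand-ins for RPUSH and HINCRBY; the one-hour slice and key names are illustrative assumptions):

```python
import json
from collections import defaultdict

SLICE = 3600                     # one-hour buckets (an illustrative choice)
lists = defaultdict(list)        # actions:<time slice> -> RPUSHed encoded events
counters = defaultdict(int)      # real-time counts per (slice, type), like HINCRBY

def record(event):
    slot = int(event['ts']) // SLICE * SLICE
    lists['actions:%d' % slot].append(json.dumps(event))   # RPUSH actions:<slot>
    counters[(slot, event['type'])] += 1                   # HINCRBY-style counter

record({'ts': 1458710360.679, 'type': 'login'})
record({'ts': 1458710400.0, 'type': 'login'})
print(counters[(1458709200, 'login')])   # → 2
```

Against a real server the counter half is a single INCR/HINCRBY per event, which is why planned metrics can be maintained in real time so cheaply.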
12. Combining structures for success (revisited)
● Redis doesn’t need to be 100% of this
○ If you only need counts over arbitrary time ranges, just actions:ts or the actions:login:ts and related sorted sets may be enough
○ Redis can be your counters/aggregate/sorted set storage, even if it’s not your event storage
● HASHes + ZSETs can solve just about anything, alternatives can offer:
○ Better performance
○ More convenient access to necessary data
○ Lower memory utilization
13. Next steps
Time series is a specific example of data modeling...
● My data modeling talk from Redisconf 2015
○ Get an idea of the general process involved
● Specific examples:
○ For indexing/search: chapter 7 of RiA, my Redisconf 2012 talk
○ Other data modeling examples: the rest of RiA, my blog
○ Survey of data modeling examples: my Python with Redis talk from July 2012
● Google will know more