Discover and identify the ideal storage solution for our needs by examining the history of data storage and modern database systems, including key-value, relational, graph, and document databases.
This presentation was given at RootsTech 2013 in March
The World of Messaging, Seen Through Spring Integration — Wangeun Lee
[Spring Camp 2015] These are the slides for "The World of Messaging, Seen Through Spring Integration".
The example source repository is linked inside the presentation.
Thank you.
-------------------------------------------------------------------
We are always communicating with someone. Through communication we ask others to do work, and we receive work ourselves. Applications are no different: heterogeneous applications communicate with each other through data, and situations arise where they need to distribute work among themselves.
Before such distributed processing can happen, communication must come first. Pioneers distilled this concern about inter-application communication into the Enterprise Integration Patterns, and Spring created Spring Integration as an abstraction of those patterns.
This talk looks at how Spring Integration lets applications communicate easily and comfortably(?), and aims to help you get started with Spring Integration through examples and real-world cases.
If the servers you need to patch are spread around the world, how can you patch them simultaneously, quickly, and reliably? This talk covers what was built to make that possible: 1) an algorithm that quickly generates small patch data, 2) technology for global data replication, and 3) the design and architecture of a system that supports simultaneous patching and rollback. Along with a demo of the core features, we share how Nexon America actually uses it and, based on empirical data, how much improvement has been achieved in the field.
Growth Hacking & Data Product - Gonnector Dylan Ko — Dylan Ko
* Detailed interview about this talk - https://fyi.so/2Rl04JS
[Table of Contents]
1. Seeing growth hacking properly
2. The essence of "personalization": not an option but a necessity
3. Global success stories of innovation with the CDP (Customer Data Platform) that implements personalization
4. The core of the "data product", where all of these threads meet
5. Service/data design methodology and other essentials for building good data products
* Slides from the two-hour lecture "Growth Hacking and Data Product", hosted by PPSS Academy at the Dream Plus Gangnam center on October 29, 2018
[Agenda]
1. How to understand Growth Hacking properly
2. The essence of Personalization: not an option but a necessity
3. Global innovation use cases of personalization using a CDP (Customer Data Platform)
4. The core of the Data Product, which underlies all of the above
5. Service and data architecture design methodology, and other details of making a well-made data product
#그로스해킹 #데이터액션 #고넥터 #데이터사이언스 #서비스디자인 #GrowthHacking #DataAction #DataScience #Gonnector #ServiceDesign
Introduction and Overview of Apache Kafka, TriHUG July 23, 2013 — mumrah
Apache Kafka is a new breed of messaging system built for the "big data" world. Coming out of LinkedIn (and donated to Apache), it is a distributed pub/sub system built in Scala. It has been an Apache TLP now for several months with the first Apache release imminent. Built for speed, scalability, and robustness, Kafka should definitely be one of the data tools you consider when designing distributed data-oriented applications.
The talk will cover a general overview of the project and technology, with some use cases, and a demo.
Salvatore Sanfilippo – How Redis Cluster works, and why - NoSQL matters Barce... — NoSQLmatters
Salvatore Sanfilippo – How Redis Cluster works, and why
In this talk the algorithmic details of Redis Cluster will be exposed in order to show the design tensions in the clustered version of a high-performance database supporting complex data types, the selected tradeoffs, and their effect on the availability and consistency of the resulting solution. Other non-chosen solutions in the design space will be illustrated for completeness.
These are the slides from my talk at Hulu in March 2015 discussing Apache Spark & Cassandra. I cover the evolution of data from a single machine to RDBMS (MySQL is the primary example) to big data systems.
On the Spark side, I cover batch jobs, streaming, Apache Kafka, an introduction to machine learning, clustering, logistic regression, and recommendation systems (collaborative filtering).
The talk was recorded and is available on YouTube: https://www.youtube.com/watch?v=_gFgU3phogQ
The relational database model was designed to solve the problems of yesterday’s data storage requirements. The massively connected world of today presents different problems and new challenges. We’ll explore the NoSQL philosophy, before comparing and contrasting the strengths and weaknesses of the relational model versus the NoSQL model. While stepping through real-world scenarios, we’ll discuss the reasons for choosing one solution over the other.
To complete the session, we'll demonstrate our findings with an application written on a NoSQL storage layer and explain the advantages that accrue from that decision. By looking at the new challenges we face with our data storage needs, we'll examine why the principles behind NoSQL make it a better candidate as a solution than yesterday's relational model.
Getting started with Spark & Cassandra by Jon Haddad of Datastax — Data Con LA
Massively scalable, always on, and ridiculously fast. Apache Cassandra is the database chosen by Apple, Netflix, and 30 of the Fortune 100 to power their critical infrastructure. How do we analyze petabytes of data, whether in massive batches or as it's ingested via streaming with Apache Kafka? Enter Apache Spark. Challenging MapReduce head on, Apache Spark offers powerful constructs that make it possible to slice and dice your data, whether through machine learning, graph queries, or transformations familiar to people with functional programming backgrounds, such as map, filter, and reduce. Step away ready to rock with the most powerful distributed database, scalable messaging, and analytics platform on the planet.
Watch the video here
https://www.youtube.com/watch?v=X-FKmKc9hkI
Evolution of the DBA to Data Platform Administrator/Specialist — Tony Rogerson
DBAs used to be relational-database centric, for instance managing Microsoft SQL Server or Oracle. In this changing world of polyglot database environments, their role has expanded not just into new platforms beyond SQL but also into new legal governance, modelling techniques, architecture, etc. They need a base knowledge of Kimball, Inmon, Data Vault, the CAP theorem, Lambda architecture, Big Data, Data Science, etc.
Combine Spring Data Neo4j and Spring Boot to quickl... — Neo4j
Speakers: Michael Hunger (Neo Technology) and Josh Long (Pivotal)
Spring Data Neo4j 3.0 is here and it supports Neo4j 2.0. Neo4j is a tiny graph database with a big punch. Graph databases are eminently suited to asking interesting questions and doing analysis. Want to load the Facebook friend graph? Build a recommendation engine? Neo4j's just the ticket. Join Spring Data Neo4j lead Michael Hunger (@mesirii) and Spring Developer Advocate Josh Long (@starbuxman) for a look at how to build smart, graph-driven applications with Spring Data Neo4j and Spring Boot.
SQL vs. NoSQL. It's always a hard choice. — Denis Reznik
This will be an interesting and sometimes fun session with a small demo. It will answer some of your questions and force you to think about new ones. It will not be very technical, so it's OK to choose another, more technical session from the schedule :) But if you decide to come, I can assure you that you will not be disappointed. We will run a thought experiment with one famous high-load public website, look at the advantages and disadvantages of SQL and NoSQL databases, and choose the best database engine for it.
Intro deck from Cassandra Day Atlanta. Covers the evolution of data storage and analysis, the architecture of Cassandra, the read & write path, and using Cassandra for analytics. By Jon Haddad & Luke Tillman
When starting a new project, we used to simply pick one of the SQL databases available at the time, but over the last five years the situation has changed dramatically. The choice is now much harder. SQL or NoSQL? Cloud or on-premises? If SQL/NoSQL, which one exactly? Or perhaps use both?
In this talk we will try to give a general overview of the data storage solutions available today and work out criteria for choosing among them.
State of the Gopher Nation - Golang - August 2017 — Steven Francia
This talk is an overview of the Go project. It covers “what we’ve done”, “why we did it” and “where we are going” as a project.
It highlights our accomplishments, challenges and how the Go Project is working on our challenges.
The Future of the Operating System - Keynote LinuxCon 2015 — Steven Francia
Linux has become the foundation for infrastructure everywhere as it defined application portability from the desktop to the phone and from to the data center to the cloud. As applications become increasingly distributed in nature, the Docker platform serves as the cornerstone of Linux’s evolution solidifying the dominance of Linux today and into tomorrow.
Given as a Keynote at LinuxCon 2015 in Tokyo
Given at GopherFest 2015. This is an updated version of the talk I gave in NYC Nov 14 at GothamGo.
“We need to think about failure differently. Most people think mistakes are a necessary evil. Mistakes aren't a necessary evil, they aren't evil at all. They are an inevitable consequence of doing something new and as such should be seen as valuable.” - Ed Catmull
As Go is a "new" programming language we are all experimenting and learning how to write better Go. While most presentations focus on the destination, this presentation focuses on the journey of learning Go and the mistakes I personally made while developing Hugo, Cobra, Viper, Afero & Docker.
What every successful open source project needs — Steven Francia
In the last few years open source has transformed the software industry. From Android to Wikipedia, open source is everywhere, but how does one succeed in it? While open source projects come in all shapes and sizes and all forms of governance, no matter what kind of project you’re a part of, there are a set of fundamentals that lead to success. I’d like to share some of the lessons I’ve learned from running two of the largest commercial open source projects, Docker and MongoDB, as well as some very successful community projects.
This presentation was delivered at sinfo.org in Feb 2015.
7 Common mistakes in Go and when to avoid them — Steven Francia
I've spent the past two years developing some of the most popular libraries and applications written in Go. I've also made a lot of mistakes along the way. Recognizing that "The only real mistake is the one from which we learn nothing. -John Powell", I would like to share with you the mistakes that I have made over my journey with Go and how you can avoid them.
Go for Object Oriented Programmers or Object Oriented Programming without Obj... — Steven Francia
Object Oriented (OO) programming has dominated software engineering for the last two decades. The paradigm built on powerful concepts such as Encapsulation, Inheritance, and Polymorphism has been internalized by the majority of software engineers. Although Go is not OO in the strict sense, we can continue to leverage the skills we’ve honed as OO engineers to come up with simple and solid designs.
Gopher Steve Francia, author of [Hugo](http://hugo.spf13.com), [Cobra](http://github.com/spf13/cobra), and many other popular Go packages, makes these difficult concepts accessible for everyone.
If you’re an OO programmer, especially one with a background in dynamic languages, and are curious about Go, then this talk is for you. We will cover everything you need to know to leverage your existing skills and quickly start coding in Go, including:
How to use our Object Oriented programming fundamentals in Go
Static and pseudo-dynamic typing in Go
Building fluent interfaces in Go
Using Go interfaces and duck typing to simplify architecture
Common mistakes made by those coming to Go from other OO languages (Ruby, Python, JavaScript, etc.)
Principles of good design in Go.
This presentation will give developers an introduction and practical experience of using MongoDB with the Go language. MongoDB Chief Developer Advocate & Gopher Steve Francia presents plainly what you need to know about using MongoDB with Go.
As an emerging language, Go is able to start fresh without years of relational database dependencies. Application and library developers are able to build applications using the excellent Mgo MongoDB driver and the reliable Go sql package for relational databases. Find out why some people claim Go and MongoDB are a “pair made in heaven” and “the best database driver they’ve ever used” in this talk by Gustavo Niemeyer, the author of the mgo driver, and Steve Francia, the drivers team lead at MongoDB Inc.
We will cover:
Connecting to MongoDB in various configurations
Performing basic operations in Mgo
Marshaling data to and from MongoDB
Asynchronous & Concurrent operations
Pre-fetching batches for seamless performance
Using GridFS
How MongoDB uses Mgo internally
This presentation was given as a Workshop at OSCON 2014.
New to Go? This tutorial will give developers an introduction and practical experience in building applications with the Go language. Gopher Steve Francia, author of [Hugo](http://hugo.spf13.com), [Cobra](http://github.com/spf13/cobra), and many other popular Go packages, breaks it down step by step as you build your own full-featured Go application. Starting with an introduction to the Go language, he then reviews the fantastic Go tools available. With our environment ready, we will learn by doing. The remainder of the time will be dedicated to building a working Go web and CLI application. Through our application development experience we will introduce key features, libraries, and best practices of using Go.
This tutorial is designed with developers in mind. Prior experience with any of the following languages is preferred: Ruby, Perl, Java, C#, JavaScript, PHP, Node.js, or Python. We will be using the MongoDB database as a backend for our application.
We will be using/learning a variety of libraries including:
* bytes and strings
* templates
* net/http
* io, fmt, errors
* cobra
* mgo
* Gin
* Go.Rice
* Viper
MongoDB, Hadoop and humongous data - MongoSV 2012 — Steven Francia
Learn how to integrate MongoDB with Hadoop for large-scale distributed data processing. Using tools like MapReduce, Pig and Streaming you will learn how to do analytics and ETL on large datasets with the ability to load and save data against MongoDB. With Hadoop MapReduce, Java and Scala programmers will find a native solution for using MapReduce to process their data with MongoDB. Programmers of all kinds will find a new way to work with ETL using Pig to extract and analyze large datasets and persist the results to MongoDB. Python and Ruby Programmers can rejoice as well in a new way to write native Mongo MapReduce using the Hadoop Streaming interfaces.
While Hadoop is the most well-known technology in big data, it’s not always the most approachable or appropriate solution for data storage and processing. In this session you’ll learn about enterprise NoSQL architectures, with examples drawn from real-world deployments, as well as how to apply big data regardless of the size of your own enterprise.
This tutorial will introduce the features of MongoDB by building a simple location-based application using MongoDB. The tutorial will cover the basics of MongoDB’s document model, query language, map-reduce framework and deployment architecture.
The tutorial will be divided into 5 sections:
Data modeling with MongoDB: documents, collections and databases
Querying your data: simple queries, geospatial queries, and text-searching
Writes and updates: using MongoDB’s atomic update modifiers
Trending and analytics: Using mapreduce and MongoDB’s aggregation framework
Deploying the sample application
Besides the knowledge to start building their own applications with MongoDB, attendees will finish the session with a working application they use to check into locations around Portland from any HTML5 enabled phone!
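To make the "atomic update modifiers" step concrete, here is a minimal pure-Python sketch of what modifiers like $set, $inc, and $push do to a document. The applier and the check-in document are our own toy illustration, not MongoDB's implementation or the tutorial's actual code:

```python
def apply_update(doc, update):
    """Toy applier for a few MongoDB-style atomic update modifiers."""
    for field, value in update.get("$set", {}).items():
        doc[field] = value                       # overwrite the field
    for field, amount in update.get("$inc", {}).items():
        doc[field] = doc.get(field, 0) + amount  # numeric increment
    for field, value in update.get("$push", {}).items():
        doc.setdefault(field, []).append(value)  # append to an array field
    return doc

checkin = {"user": "ada", "visits": 1, "places": ["Powell's"]}
apply_update(checkin, {"$inc": {"visits": 1}, "$push": {"places": "Voodoo Doughnut"}})
apply_update(checkin, {"$set": {"last_place": "Voodoo Doughnut"}})
print(checkin)
```

In real MongoDB the server applies the whole modifier document atomically to one record, which is what lets many clients check in concurrently without read-modify-write races.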
TUTORIAL PREREQUISITES
Each attendee should have a running version of MongoDB. Preferably the latest unstable release 2.1.x, but any install after 2.0 should be fine. You can download MongoDB at http://www.mongodb.org/downloads.
Instructions for installing MongoDB are at http://docs.mongodb.org/manual/installation/.
Additionally, we will be building an app in Ruby. Ruby 1.9.3+ is required for this; the current latest version of Ruby is 1.9.3-p194.
For Windows, download RubyInstaller: http://rubyinstaller.org/
For OS X, download JewelryBox: http://unfiniti.com/software/mac/jewelrybox/
For Linux, most users should know how to install it for their own distributions.
We will be using the following gems, and they MUST be installed ahead of time so you can be ahead of the game and safe in the event that the Internet isn’t accommodating.
bson (1.6.4)
bson_ext (1.6.4)
haml (3.1.4)
mongo (1.6.4)
rack (1.4.1)
rack-protection (1.2.0)
shotgun (0.9)
sinatra (1.3.2)
tilt (1.3.3)
Prior Ruby experience isn’t required for this. We will NOT be using Rails for this app.
Replication, Durability, and Disaster Recovery — Steven Francia
This session introduces the basic components of high availability before going into a deep dive on MongoDB replication. We'll explore some of the advanced capabilities with MongoDB replication and best practices to ensure data durability and redundancy. We'll also look at various deployment scenarios and disaster recovery configurations.
Strategies for multi-data center deployment. Diving into the details of deploying of MongoDB across multiple data centers.
Covers the advantages of a multi data center deployment for read/write locality, the various deployment strategies, and disaster preparedness and recovery.
In addition, we’ll look at the MongoDB roadmap and planned enhancements around data center awareness.
This presentation was given at MongoNYC 2012. The animations didn’t survive the transformation to the web, so not all the meaning carries over perfectly.
An unprecedented amount of data is being created and is accessible. This presentation will instruct on using the new NoSQL technologies to make sense of all this data.
UiPath Test Automation using UiPath Test Suite series, part 4 — DianaGray10
Welcome to part 4 of the UiPath Test Automation using UiPath Test Suite series. In this session, we will cover a Test Manager overview along with the SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of Test Manager within SAP environments, coupled with the use of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
PHP Frameworks: I want to break free (IPC Berlin 2024) — Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk encourages a more independent approach to using PHP frameworks, moving towards more flexible and future-proof PHP development.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... — DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... — Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. Fostering a culture of innovation takes much work, however: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
GraphRAG is All You Need? LLM & Knowledge Graph — Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf — 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Accelerate your Kubernetes clusters with Varnish Caching — Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... — UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Key Trends Shaping the Future of Infrastructure.pdf — Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
This keynote covers the key trends across hardware, cloud, and open source, exploring how these areas are likely to mature and develop over the short and long term, and how organisations can position themselves to adapt and thrive.
Essentials of Automations: Optimizing FME Workflows with Parameters — Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... — Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Neuro-symbolic is not enough, we need neuro-*semantic* — Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
2. @spf13
AKA Steve Francia
Chief Evangelist @ MongoDB
responsible for drivers, integrations, web & docs
3. What’s the Point?
๏ Goal: Discover & identify the ideal storage solution for our needs
๏ History is important
๏ Many options today
๏ Document databases are good for genealogy
24. 1960 : DBMS Emerges
๏ Ordered set of fixed-length fields
๏ Low-level pointer operations (flat files)
๏ Most popular was IMS (created at IBM)
๏ Shockingly still in use today at IBM & American Airlines
25. Lots of Problems
๏ Complex and inflexible
๏ User had to know the physical structure of the DB in order to query for information
๏ Adding a field to the DB required rewriting the underlying access/modification scheme
๏ Records isolated (no relations)
๏ Emphasis on records to be processed, not overall structure
26. 1970 : Relational DB
๏ Edgar Frank “Ted” Codd
๏ Relational database theory
๏ Codd’s 13 rules (aka 12 rules)
27. 3 HUGE Advantages
๏ Data independence from hardware and storage implementation
๏ Ability to process more than one record at a time with a single operation
๏ Establishing a relationship between records
28. IBM vs Codd
๏ IBM bet on IMS
๏ Codd bet on the relational DB
๏ Eventually 2 relational prototypes emerge
29. Ingres
๏ Built at UC Berkeley
๏ Uses QUEL
๏ Inspires Sybase & MSSQL
30. System R
๏ Built at IBM
๏ Leads to SEQUEL... later SQL
๏ Evolved into SQL/DS, which evolved into DB2
๏ Project concludes that the relational model is viable
31. Oracle
๏ Larry Ellison watches IBM
๏ Starts Relational Software Inc.
๏ Oracle 1st commercial RDBMS
released in 1979
๏ Beats IBM by 2 years to market
32. Entity Relationship
๏ Proposed by Peter
Chen in 1976
๏ Focuses on data use
and not logical table
structure
33. 1980s
๏ RDBMS dominates
๏ Some fields (medicine,
physics, multimedia) need
more than RDBMS offers
๏ Object Databases emerge
34. Object Databases
๏ Inspired by Entity Relationship
๏ More flexible than relational permits
๏ Tightly coupled with OO
programming language (c++, later
Java)
๏ Full object: data & methods stored
35. 1990s
๏ Internet emerges
๏ Data demand spikes
๏ Databases used for
archiving historical data
36. Early 2000s
๏ Internet booms
๏ RDBMS fails to scale
๏ In desperation we take a
step backwards
37. MemcacheD
๏ 1 dimensional
๏ No persistence
๏ No ACI or D
๏ but...
39. 2005 ish
๏ Relational + MemcacheD
broken (and we didn’t know it)
๏ Scale redefined with high
volume & social
๏ Infrastructure reinvented with
cloud computing & SSDs
42. A lot going on
Easiest to define databases in
broad terms
• What is a record?
(data model)
• CAP : CA, AP, CP ?
(infrastructure model)
43. Data Storage Structure
[Diagram: three storage shapes side by side]
๏ 1D : a bare key paired with a single value
๏ 2D : each key maps to a fixed row of values
๏ nD : each key maps to value(s) that can themselves contain further keys & values
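The 1D / 2D / nD distinction can be sketched with plain Python data (illustrative only, not tied to any particular database product):

```python
# 1D (key-value): one opaque value per key -- you can only look up by key.
kv_store = {"person:42": "Ada Lovelace|1815|London"}

# 2D (relational row): a fixed set of named columns -- any field is queryable.
row = {"id": 42, "name": "Ada Lovelace", "born": 1815, "city": "London"}

# nD (document): nested fields and arrays -- queryable at any level.
doc = {
    "id": 42,
    "name": {"first": "Ada", "last": "Lovelace"},
    "events": [{"type": "birth", "year": 1815, "place": "London"}],
}

print(kv_store["person:42"])        # whole value, parsing is the app's problem
print(row["name"])                  # direct field access
print(doc["events"][0]["place"])    # access at arbitrary depth
```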
47. CAP Theorem
[Diagram: the CAP triangle, with Availability, Consistency and Partition Tolerance at the corners]
๏ Available + Partition Tolerant (“Inconsistent” edge): Dynamo & key-value NoSQLs
๏ Consistent + Available (“Partition Intolerant” edge): RDBMS
๏ Consistent + Partition Tolerant (“Unavailable” edge): MongoDB, BigTable
48. Key Value
๏ 1 Dimensional storage (tuple)
๏ Query key only
๏ Bucket index (range) on keys
๏ Records cannot be updated, only replaced
๏ Often MultiMaster... meaning availability over consistency
๏ Partitioning easy thanks to single value
Cassandra, Redis, MemcacheD, Riak, DynamoDB
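A minimal sketch of key-value semantics, using a Python dict as a stand-in store: lookups happen by key only, and an “update” replaces the whole value, as the slide notes:

```python
store = {}

def put(key, value):
    """Whole-value write: there is no partial update, only replacement."""
    store[key] = value

def get(key):
    """Lookup by key only -- no querying on the value's contents."""
    return store.get(key)

put("user:1", {"name": "Steve", "role": "evangelist"})

# To change one field we must read, modify, and replace the entire value:
v = get("user:1")
v = dict(v, role="author")
put("user:1", v)
print(get("user:1"))
```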
49. Relational
๏ 2 Dimensional storage (map)
๏ Query any field
๏ BTree indexes
๏ Single master, meaning consistency > availability
๏ Partitioning hard due to transactions & joins
Oracle, MSSQL, MySQL, PostgreSQL, DB2
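The relational properties above (query any field, relationships via joins) can be demonstrated with Python’s built-in sqlite3; the table and column names here are invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE event (person_id INTEGER, type TEXT, year INTEGER)")
con.execute("INSERT INTO person VALUES (1, 'Ada')")
con.execute("INSERT INTO event VALUES (1, 'birth', 1815)")

# Query any field, and relate records across tables with a join:
row = con.execute(
    "SELECT p.name, e.year FROM person p "
    "JOIN event e ON p.id = e.person_id "
    "WHERE e.type = 'birth'"
).fetchone()
print(row)  # ('Ada', 1815)
```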
50. Document
๏ n Dimensional storage (hash w/ nesting)
๏ Query any field at any level
๏ BTree indexes
๏ Single master, meaning consistency > availability
๏ Partitioning easy thanks to richer data model
MongoDB, CouchDB, RethinkDB
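“Query any field at any level” can be sketched with nested Python dicts; the `get_path` helper is a toy stand-in for the dotted-path matching a document database performs, not a real driver API:

```python
def get_path(doc, path):
    """Resolve a dotted path like 'events.0.year' in a nested document."""
    cur = doc
    for part in path.split("."):
        if isinstance(cur, list):
            cur = cur[int(part)]
        else:
            cur = cur.get(part)
        if cur is None:
            return None
    return cur

people = [
    {"name": {"first": "Ada", "last": "Lovelace"},
     "events": [{"type": "birth", "year": 1815}]},
    {"name": {"first": "Charles", "last": "Babbage"},
     "events": [{"type": "birth", "year": 1791}]},
]

# Match on a field nested two levels deep, across all documents:
hits = [p for p in people if get_path(p, "events.0.year") == 1815]
print(hits[0]["name"]["last"])  # Lovelace
```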
51. Graph
๏ 1 Dimensional storage... but grouped to appear
2D
๏ Differentiated by indexes
๏ Large indexes cover many relationships
๏ Query time depends on # records returned,
not distance to get them
๏ Doesn’t require traversing to determine
relationship
Neo4j, and about 20 more that nobody talks much about
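The point that relationship lookups are index hits rather than traversals of intermediate records can be sketched with an adjacency index in Python (illustrative only; real graph stores like Neo4j differ in implementation):

```python
from collections import defaultdict

# Edge list: (from_node, relation, to_node) -- names invented for illustration.
edges = [
    ("ada", "child_of", "byron"),
    ("byron", "child_of", "john"),
    ("ada", "married_to", "william"),
]

# Index edges by source node: each neighbor lookup is a direct index hit,
# independent of where the target record physically lives.
index = defaultdict(list)
for src, rel, dst in edges:
    index[src].append((rel, dst))

# Grandparent query: two index lookups, no scan of unrelated records.
grandparents = [
    g
    for rel1, p in index["ada"] if rel1 == "child_of"
    for rel2, g in index[p] if rel2 == "child_of"
]
print(grandparents)  # ['john']
```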
54. Types of genealogy data
๏ Events (birth, death, etc)
๏ Official records
๏ Census
๏ Names
๏ Relationships
๏ Photographs
๏ Diaries & letters
๏ Ship passenger lists
๏ Occupation
๏ and more
55. Challenges of genealogy data
๏ Lots of possible data points... need flexible schema
๏ Multiple versions of same data point (3 different dates for death date, 4 variations on name)
๏ Lots of data associated with physical records
๏ Multiple versions of same nodes (intelligent nondestructive merge needed)
๏ Need to have metadata associated
56. [Schema diagram: Individual, User, Location & Record entities]
Individual
• AFN
• Modification Date
• Name { First[], Middle[], Last[] }
• Events[] { type, date, contributor[], record[] }
User
• Name
• Email Address
• Password
• Individual_id
Location
• city • state • county • country • coordinates[]
Record
• contributor • type • thumbnail • content • description • tags[]
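A hedged sketch of how the Individual entity on this slide could look as a single document: field names follow the slide, while the AFN, dates and user ids are invented for illustration. Arrays make the earlier challenges (multiple name variations, conflicting event dates) fit naturally:

```python
individual = {
    "AFN": "1ABC-234",                # hypothetical identifier
    "modification_date": "2013-03-21",
    "name": {
        "first": ["Ada", "Augusta"],  # multiple name variations coexist
        "middle": ["King"],
        "last": ["Lovelace"],
    },
    "events": [
        {"type": "birth", "date": "1815-12-10", "contributor": ["u1"], "record": []},
        # Two conflicting death dates can coexist until a nondestructive merge:
        {"type": "death", "date": "1852-11-27", "contributor": ["u2"], "record": []},
        {"type": "death", "date": "1852-11-29", "contributor": ["u3"], "record": []},
    ],
}

death_dates = [e["date"] for e in individual["events"] if e["type"] == "death"]
print(death_dates)
```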
63. MongoDB: Scale built in
๏ Intelligent replication
๏ Automatic partitioning of data
(user configurable)
๏ Horizontal Scale
๏ Targeted Queries
๏ Parallel Processing
64. Intelligent Replication
[Diagram: a three-node replica set. Node 3 (Primary) replicates to Node 1 (Secondary) and Node 2 (Secondary); the two secondaries exchange a heartbeat.]
65. Scalable Architecture
[Diagram: App Servers connect through Mongos routers; Config Servers hold the cluster metadata; data is split across three Shards, each a replica set of nodes.]
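How a mongos-style router targets a query can be sketched as a range lookup: each shard owns a range of shard-key values, so a query on the shard key goes to exactly one shard. The ranges and shard names below are invented for illustration:

```python
# (lower bound inclusive, upper bound exclusive, owning shard)
chunks = [
    ("a", "h", "shard1"),
    ("h", "p", "shard2"),
    ("p", "{", "shard3"),   # '{' sorts just past 'z', closing the range
]

def route(key):
    """Return the single shard responsible for this shard-key value."""
    for lo, hi, shard in chunks:
        if lo <= key < hi:
            return shard
    raise KeyError(key)

print(route("lovelace"))  # shard2
print(route("babbage"))   # shard1
```

A query that does not include the shard key cannot be routed this way and must be scattered to every shard, which is why shard-key choice matters for the targeted queries mentioned on slide 63.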
70. Broad Feature Set
๏ Rich query language
๏ Native drivers for over 12 languages
๏ GeoSpatial
๏ Text search
๏ Aggregation & MapReduce
๏ GridFS
(distributed & replicated file storage)
๏ Integration with Hadoop, Solr & more
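The kind of grouping the aggregation framework performs can be expressed in plain Python; the commented pipeline is only a sketch of the equivalent MongoDB stages, not code from this talk:

```python
from collections import Counter

events = [
    {"type": "birth", "place": "London"},
    {"type": "birth", "place": "London"},
    {"type": "birth", "place": "Teignmouth"},
]

# Roughly equivalent in spirit to an aggregation pipeline such as:
#   db.events.aggregate([{"$match": {"type": "birth"}},
#                        {"$group": {"_id": "$place", "n": {"$sum": 1}}}])
counts = Counter(e["place"] for e in events if e["type"] == "birth")
print(dict(counts))  # {'London': 2, 'Teignmouth': 1}
```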