How will ransomware change the IoT world, how is it already changing it, and what needs to change right now so things don't get even worse?
Lecture at http://www.cryptoparty.si/2017/09/14/iot-meetup-2017-tadej-hren-si-cert-iot-in-izsiljevalski-virusi/
A workshop on planning online communications, with an emphasis on planning and continuously producing content with the user and the target audience in mind.
Slovensko društvo za odnose z javnostmi: Is it time to panic yet? - Domen Savič
Invited lecture at Slovensko društvo za odnose z javnostmi (the Slovenian Public Relations Society). More: http://www.piar.si/aktualno/dogodki/dogodki-prss/splet-in-javni-sektor-nujno-zlo-ali-priloznost-za-razvoj/
Net neutrality and the Slovenian economy - Domen Savič
How European legislation will affect net neutrality in Slovenia, who the main actors in this field are, and why it is so important that our voice is heard in the public debate.
"Employee engagement" is a positive state of employees in terms of emotions, knowledge, and behavior; it is the moment when employees think, feel, and act in line with organizational goals because they genuinely believe in them, and at the same time an intense sense of individual attachment to the organization, one's work, and one's colleagues.
Marjeta Tič Vesel, Pristop, at the monthly meeting of Društvo za marketing Slovenije (the Slovenian Marketing Association).
Why is media literacy absolutely necessary in this day and age, what does it mean to be media literate and how did the media industry develop in the past?
Optimizing the Profitable Link Between Employees and Customer Loyalty Behavior - Aquent
The document discusses research on employee ambassadorship and its link to customer loyalty and business results. It presents a conceptual model showing that emotionally and rationally committed employees can become advocates who actively promote the brand, while disconnected employees may become saboteurs who negatively impact reputation. Research findings demonstrate strong correlations between employee commitment measures and customer loyalty/satisfaction ratings.
Educational Psychology 1 (Pedagoška psihologija 1), lecture at the Department of Psychology, University of Maribor
The document discusses tools for building the future and their impact. It notes that the speed of iteration matters and that countless hours are lost building administrative interfaces and integrations. It advocates building using a library of standardized reusable components and learning from blockchains about openness and lowering friction. Trends to watch include no-code, augmenting human intelligence with AI, and API-first and systems-level thinking.
Having programmers do data science is terrible, if only everyone else were not even worse. The problem is of course tools. We seem to have settled on one of two things: a bunch of disparate libraries thrown into a more or less agnostic IDE, or some point-and-click wonder that, no matter how glossy, never seems to truly fit our domain once we get down to it. The dual lisp tradition of grow-your-own-language and grow-your-own-editor gives me hope there is a third way.
This presentation is a meditation on how I approach data problems with Clojure, what I believe the process of doing data science should look like and the tools needed to get there. Some already exist (or can at least be bodged together); others can be made with relative ease (and we are already working on some of these); but a few will take a lot more hammock time.
Clojure is fantastic for data manipulation and rapid prototyping, but falls short when it comes to communicating your insights. What is lacking are good visualization libraries and (shareable) notebook-like environments. I'll show my workflow in org-babel, which weaves Clojure with R (for ggplot) and Python (for scikit-learn), and tell you why it's wrong, how the IPythons of the world have trapped us in a local maximum, and how we need a reconceptualization similar to what a REPL does to programming. All this interspersed with my experience doing data science with Clojure (everything from ETL to on-the-spot analysis during brainstorming sessions).
The document discusses tools for data analysis and building intelligence including Metabase, an open source business intelligence tool used by over 21,000 companies daily. It focuses on speeding up the time it takes to answer questions from data through automation and building a "data scientist in a box". The goal is to answer 80% of questions from data in under 20 minutes to facilitate real-time exploration and problem solving.
The document provides guidance on leveling up a company's data infrastructure and analytics capabilities. It recommends starting by acquiring and storing data from various sources in a data warehouse. The data should then be transformed into a usable shape before performing analytics. When setting up the infrastructure, the document emphasizes collecting user requirements, designing the data warehouse around key data aspects, and choosing technology that supports iteration, extensibility and prevents data loss. It also provides tips for creating effective dashboards and exploratory analysis. Examples of implementing this approach for two sample companies, MESI and SalesGenomics, are discussed.
Recommendation algorithms and their variations, such as ranking, are the most common way for machine learning to find its way into a product where it is not the main focus. In this talk we’ll dig into the subtleties of making recommendation algorithms a seamless and integral part of your UX (goal: it should completely fade into the background; the user should not be aware she’s interacting with any kind of machine learning, it should just feel right, perhaps smart or even a tad like cheating); how to solve the cold start problem (and, more generally, having little training data); and how to effectively collect feedback data. I’ll be drawing from my experiences building Metabase, an open source analytics/BI tool, where we extensively use recommendations and ranking to keep users in a state of flow when exploring data; to help with discoverability; and as a way to gently teach analysis and visualization best practices; all on the way towards building an AI data scientist.
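To make the cold-start point concrete, here is a minimal Clojure sketch of one common trick (this is not Metabase's actual code; the item maps and the prior-weight knob are made up): shrink an item's observed feedback toward a global prior, so barely-seen items rank by the prior and gradually earn their own score as evidence accumulates.

```clojure
(defn smoothed-score
  "Shrink an item's observed positive rate toward a global prior.
   `prior-weight` acts like that many phantom impressions."
  [{:keys [positive total]} {:keys [prior prior-weight]}]
  (/ (+ positive (* prior-weight prior))
     (+ total prior-weight)))

(let [global {:prior 0.1 :prior-weight 20}]
  (sort-by #(- (smoothed-score % global))
           [{:id :new-item    :positive 1  :total 1}    ; a perfect 1/1 rate
            {:id :proven-item :positive 40 :total 100}]))
;; :proven-item ranks first until :new-item accumulates real evidence
```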
This document summarizes Metabase, an open source business intelligence and analytics tool that runs on-premise and is data-agnostic. Metabase is used by over 13,000 companies daily, including Go-Jek which has 4,000 daily active users. Some common use cases for Metabase include exploratory analysis, product development, product analytics, support, customer success, BI dashboarding, and marketing. The document also discusses how Metabase can be used for data-driven product development, such as segmenting users by usage and analyzing feature usage.
In this talk we will look at how to efficiently (in both space and time) summarize large, potentially unbounded, streams of data by approximating the underlying distribution using so-called sketch algorithms. The main approach we are going to be looking at is summarization via histograms. Histograms have a number of desirable properties: they work well in an on-line setting, are embarrassingly parallel, and are space-bound. Not to mention they capture the entire (empirical) distribution which is something that otherwise often gets lost when doing descriptive statistics. Building from that we will delve into related problems of sampling in a stream setting, and updating in a batch setting; and highlight some cool tricks such as capturing time-dynamics via data snapshotting. To finish off we will touch upon algorithms to summarize categorical data, most notably count-min sketch.
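As a taste of the sketch-algorithm approach, here is a minimal count-min sketch in plain Clojure (a sketch under assumptions: the d hash functions are simulated by salting Clojure's built-in hash, and width/depth are arbitrary). Each row can only overcount, so taking the minimum across rows gives an estimate that never undercounts.

```clojure
(defn cms-make [width depth]
  {:width width
   :depth depth
   :table (vec (repeat depth (vec (repeat width 0))))})

(defn- cms-idx [width row x]
  ;; Salt the built-in hash with the row number to simulate d hash functions.
  (mod (hash [row x]) width))

(defn cms-add [{:keys [width depth] :as cms} x]
  (reduce (fn [c row] (update-in c [:table row (cms-idx width row x)] inc))
          cms
          (range depth)))

(defn cms-estimate [{:keys [width depth table]} x]
  (apply min (for [row (range depth)]
               (get-in table [row (cms-idx width row x)]))))

(def sketch (reduce cms-add (cms-make 64 4) ["a" "b" "a" "c" "a"]))
(cms-estimate sketch "a") ;; => 3
```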
Transducers -- composable algorithmic transformations decoupled from input or output sources -- are Clojure’s take on data transformation. In this talk we will look at what makes a transducer; push their composability to the limit, chasing the panacea of building complex single-pass transformations out of reusable components (e.g. calculating a bunch of descriptive statistics like sum, sum of squares, mean, variance, ... in a single pass without resorting to a spaghetti-ball fold); and explore how the fact that they are decoupled from input and output traversal opens up some interesting possibilities, as they can be made to work in both online and batch settings; all drawing from practical examples of using Clojure to analyze “awkward-size” data.
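A minimal illustration of the single-pass idea (plain Clojure, no libraries; a sketch rather than the talk's actual code): one reducing function accumulates count, sum, and sum of squares in a single traversal, and the completion arity derives mean and variance. Because transducers are decoupled from their source, any transformation composes in front of the same fold.

```clojure
(defn stats-rf
  "Reducing function computing several descriptive statistics in one pass."
  ([] {:n 0 :sum 0.0 :sum-sq 0.0})
  ([{:keys [n sum sum-sq]}]                     ; completion: derive stats
   {:n n
    :mean (/ sum n)
    :variance (- (/ sum-sq n) (Math/pow (/ sum n) 2))})
  ([acc x]                                      ; step: accumulate
   (-> acc
       (update :n inc)
       (update :sum + x)
       (update :sum-sq + (* x x)))))

(transduce (comp (filter number?) (map double)) stats-rf [1 2 3 4 5])
;; => {:n 5, :mean 3.0, :variance 2.0}
```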
According to the document, your metrics are wrong. The document provides reasons why metrics may be wrong and recommendations on how to improve them. Specifically, it recommends thinking in terms of distributions and segmentation rather than aggregates, understanding that populations are dynamic rather than static, determining what is signal versus noise, considering reference points and reproducibility, and documenting metric definitions.
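A toy Clojure illustration of the distributions-over-aggregates point (numbers are made up): two user segments with identical mean session lengths but very different medians, which a naive quantile lookup immediately exposes.

```clojure
(defn quantile [xs q]
  (let [sorted (vec (sort xs))]
    (nth sorted (min (dec (count sorted))
                     (long (* q (count sorted)))))))

(def segment-a [10 10 10 10 10])  ; stable users
(def segment-b [1 1 1 1 46])      ; mostly churned, plus one whale

(map #(/ (reduce + %) (count %)) [segment-a segment-b])
;; => (10 10)   identical means
(map #(quantile % 0.5) [segment-a segment-b])
;; => (10 1)    wildly different medians
```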
Writing correct smart contracts is hard (a recent study estimated that 3% of Ethereum contracts in the wild have some sort of security vulnerability; we all know of the DAO and Parity exploits, …). There are two main reasons for this. First and foremost, developing for the blockchain is quite different from what most programmers are used to. The level of concurrency is far beyond our (von Neumann) intuition and mental models. And you can’t stop and inspect running code like you can in other systems. Taken together, blockchain is closer to a physical/living system than conventional software — a fact not reflected in the tools available. Compared to other domains, our tooling and programming languages are somewhere between rudimentary and bad; and a far cry from where they would need to be to augment developers and help make programming for the blockchain less alien and less error-prone. In this talk we will first unpack what makes programming for the blockchain hard, and what the most common types of vulnerabilities and their causes are. Then we will look at state-of-the-art programming language research in correctness proving and programming massively concurrent systems, and how these can be applied to programming smart contracts; revisit some technologies from the past that didn’t get traction at the time, but are nevertheless worth studying; and finish off by trying to imagine what programming for the blockchain should, and perhaps one day will, look like.
Online statistical analysis using transducers and sketch algorithms - Simon Belak
Online statistical analysis using transducers and sketch algorithms. Don’t know what either is? You are going to learn something very cool (and perspective-changing) then. Know them, but want an experience report? Got you covered, fam.
OpenAI recently published a fun paper where they showed that evolution strategies can train policy networks to perform on par with state-of-the-art deep reinforcement learning. In this talk we’ll try to reimplement the main ideas in that paper using Neanderthal (blazing-fast matrix and linear algebra computations) and Cortex (neural networks); make it massively distributed using Onyx; build a simulation environment using re-frame; and of course save our princess from no particular harm in our toy game example.
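For flavor, a minimal pure-Clojure sketch of the core evolution-strategies update (no Neanderthal, Cortex, Onyx, or re-frame; the toy objective and hyperparameters are made up): perturb the parameters with Gaussian noise, score each perturbation, and move along the fitness-weighted average of the noise.

```clojure
(def ^:private rng (java.util.Random.))
(defn- rand-gaussian [] (.nextGaussian rng))

(defn es-step
  "One ES update: theta <- theta + alpha/(n*sigma) * sum_i(F_i * eps_i)."
  [theta fitness-fn {:keys [pop-size sigma alpha]}]
  (let [noise  (vec (repeatedly pop-size
                                #(vec (repeatedly (count theta) rand-gaussian))))
        scores (mapv #(fitness-fn (mapv + theta (mapv (partial * sigma) %)))
                     noise)]
    (mapv + theta
          (apply mapv (fn [& eps]
                        (* (/ alpha (* pop-size sigma))
                           (reduce + (map * scores eps))))
                 noise))))

;; Toy objective: maximize -(x-3)^2 - (y+1)^2, optimum at [3 -1].
(defn fitness [[x y]]
  (- (- (* (- x 3) (- x 3)))
     (* (+ y 1) (+ y 1))))

(nth (iterate #(es-step % fitness {:pop-size 50 :sigma 0.1 :alpha 0.02})
              [0.0 0.0])
     500)
;; => a vector close to [3.0 -1.0]
```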
How to systematically open a new market where every step is supported by data, how to set up learning loops, and where to look for optimization opportunities.
You can do cool and unexpected things if your entire type system is a first class citizen and accessible at runtime.
With the introduction of spec, Clojure got its own distinct spin on a type system. Just as macros add another -time (compile time in addition to runtime) where the full power of the language can be used, spec does the same for describing data.
The result is an entire additional type system, a first-class citizen accessible at runtime, that facilitates validation, generative testing (a la QuickCheck), destructuring (pattern matching into deeply nested data), data macros (recursive transformations of data), and a pluggable error system. And then you can start building on top of it.
The talk will be half introduction to spec and the ideas packed within it, and half experience report on instrumenting a 15k LOC production codebase (primarily ETL and analytics) with spec.
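A small taste of what the abstract describes (requires Clojure 1.9+; the specs themselves are made-up examples): specs are registered at runtime, validate data, explain failures, and double as a destructuring/pattern-matching mechanism via conform.

```clojure
(require '[clojure.spec.alpha :as s])

(s/def ::id pos-int?)
(s/def ::email (s/and string? #(re-matches #".+@.+" %)))
(s/def ::user (s/keys :req-un [::id ::email]))

(s/valid? ::user {:id 1 :email "ada@example.com"})  ;; => true
(s/explain-str ::user {:id -1 :email "nope"})       ;; human-readable error

;; conform as pattern matching: s/or tags which branch matched
(s/def ::lookup (s/or :by-id ::id :by-email ::email))
(s/conform ::lookup 42)        ;; => [:by-id 42]
(s/conform ::lookup "a@b.si")  ;; => [:by-email "a@b.si"]
```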
Clojure has always been good at manipulating data. With the release of spec and Onyx (“a masterless, cloud scale, fault tolerant, high performance distributed computation system”) good became best. In this talk you will learn about a streaming data layer architecture built around Kafka and Onyx that is self-describing, declarative, scalable, and convenient to work with for the end user. The focus will be on the power and elegance of describing data and computation with data; the inferences and automations that can be built on top of that; and how and why Clojure is a natural choice for tasks that involve a lot of data manipulation, touching both on functional programming and lisp-specifics such as code-is-data.
We will look at how such an approach can be used to manage a data warehouse by automatically inferring materialized views from raw incoming data or other views based on a combination of heuristics, statistical analysis (seasonality, outlier removal, ...) and predefined ontologies. Doing so is a practical way to maintain a large number of views, increasing their availability and abstracting the complexity into declarative rules, rather than having an ETL pipeline with dozens or even hundreds of hand crafted tasks.
The system described requires relatively little effort upfront but can easily grow with one's needs, both in scale and in scope. With its good introspection capabilities and strong decoupling it is, for instance, an excellent substrate for putting machine learning algorithms in production, which is the final use case we will dive into.
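To give a feel for "describing computation with data": an Onyx job is literally a Clojure map, roughly like the sketch below (keys reproduced from memory, so treat the exact required set as an assumption and consult the Onyx docs; real input/output tasks also need plugin-specific entries). Because the job is plain data, it can be inspected, validated with spec, or generated by the view-inference machinery described above.

```clojure
(defn enrich [segment]
  ;; Plain function over a segment (a map); referenced by keyword below.
  (assoc segment :processed-at (System/currentTimeMillis)))

(def job
  {:workflow [[:read-events :enrich]          ; DAG as pairs of task names
              [:enrich :write-warehouse]]
   :catalog  [{:onyx/name :read-events
               :onyx/type :input
               :onyx/batch-size 100}
              {:onyx/name :enrich
               :onyx/type :function
               :onyx/fn   ::enrich
               :onyx/batch-size 100}
              {:onyx/name :write-warehouse
               :onyx/type :output
               :onyx/batch-size 100}]})
```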
Segmentation is key to effectively addressing and converting potential customers. Simon Belak, head of analytics at GoOpti and transmedia editor at the critical newspaper Tribuna, revealed how to discover segments from data.
In his words, it is entirely unjustifiable that segmentation is mostly static and done blindly, without regard for data. In the talk he presented an alternative: analytical, partly automated discovery of segments from data.
Using concrete examples, he showed how to map customer-interaction data (page visits as indicators of interest, survey responses, on-site navigation patterns, email opens…) into a customer model, and then went on to split customers into segments. Simon concluded by highlighting the most common pitfalls and small tricks for cases where data is scarce or ambiguous.
@sbelak
Simon Belak
Using Onyx in anger
Clojure has always been good at manipulating data. With the release of spec and Onyx ("a masterless, cloud scale, fault tolerant, high performance distributed computation system") good became best. In this talk I will walk you through a data layer architecture built around Kafka and Onyx that is self-describing, declarative, scalable, and convenient to work with for the end user. The focus will be on the power and elegance of describing data and computation with data; and the inferences and automations that can be built on top of that.
Whenever a programming language comes out with a new feature, us smug lisp weenies shrug and point out how lisp had that in the early seventies; and if you look at the list of influences of a given language, there is bound to be a lisp in there. In this talk I will try to unpack what makes lisp special, why it is called the programmable programming language, how it changes one’s thinking, and how that thinking can be applied elsewhere.
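A bite-sized example of the code-is-data idea the talk alludes to (`unless` is a made-up macro for illustration; Clojure ships `when-not`): the macro receives its argument forms as plain data and returns new code built by ordinary list manipulation, which is what "growing your own language" means in practice.

```clojure
(defmacro unless [test then else]
  ;; Arguments arrive unevaluated, as data; we return a new form.
  (list 'if test else then))

(unless (zero? 0) :nope :yep) ;; => :yep

;; Because code is data, we can inspect the rewrite itself:
(macroexpand-1 '(unless (zero? 0) :nope :yep))
;; => (if (zero? 0) :yep :nope)
```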