Microservice environments with databases often grow into complex architectures behind the scenes, to the point where requirements can no longer be met. This talk shows how to run a scalable stack with persistent data storage based on Docker, and how that leads to fewer grey hairs on the Ops team.
Open Machine Data Analysis Stack with Docker, CrateDB, and Grafana @Chadev+Lunch - Claus Matzinger
Predictive analytics, Internet of Things, Industry 4.0 - everybody has heard them at least once, but what do real installations look like? How can containerized Microservices help deployment and increase productivity? Claus from Crate.io will answer any and all of these questions and show real world examples with a stack based on Raspberry Pis, Grafana, Docker, and Rust.
Analyzing Sensor Data with Docker, CrateDB, and Grafana - Claus Matzinger
Predictive analytics, Internet of Things, Industry 4.0: terms on everyone's lips. But what do real installations look like? How can container-based microservices simplify the deployment process while increasing productivity? In this talk, Claus Matzinger from Crate.io answers all of these questions and presents real-world best practices using Raspberry Pis, Grafana, and Rust.
Containerized DBs in a Machine Data environment with Crate.io - Claus Matzinger
Predictive analytics, Internet of Things, Industry 4.0 - everybody has heard them at least once, but what do real installations look like? How can containerized Microservices help deployment and increase productivity? Claus from Crate.io will answer any and all of these questions and show real world examples with a stack based on Raspberry Pis, Grafana, Docker, and Rust.
OSDC 2017 - Claus Matzinger - An Open Machine Data Analysis Stack with Docker... - NETWAYS
Predictive analytics, Internet of Things, Industry 4.0 - everybody has heard them at least once, but what do real installations look like? How can containerized Microservices help deployment and increase productivity? Claus from Crate.io will answer any and all of these questions and show real world examples with a stack based on Raspberry Pis, Grafana, Docker, and Rust.
Big and fast: a quest for relevant and real-time analytics - Natalino Busa
Our retail banking market demands, now more than ever, that we stay close to our customers and carefully understand which services, products, and wishes are relevant for each customer at any given time.
This sort of marketing research is often beyond the capacity of traditional BI reporting frameworks. In this talk, we illustrate how we team up data scientists and big data engineers in order to create and scale distributed analyses on a big data platform.
The QAware Big Data Landscape provides a detailed overview of the most relevant Big Data technologies, most of them from the open-source ecosystem.
(Please download the file for better readability / rotated view)
A Hadoop User Group (HUG) Ireland talk on Data Science production environments and their online set up using #ExpertModels by Cronan McNamara, CEO @CremeGlobal
How To Use Kafka and Druid to Tame Your Router Data (Rachel Pedreschi, Imply ... - confluent
Do you know who is knocking on your network’s door? Have new regulations left you scratching your head about how to handle what is happening in your network? Network flow data helps answer many questions across a multitude of use cases, including network security, performance, capacity planning, routing, operational troubleshooting, and more. Today’s streaming data pipelines need tools that can scale to meet the demands of these service providers while continuing to provide responsive answers to difficult questions. In addition to stream processing, data needs to be stored in a redundant, operationally focused database that provides fast, reliable answers to critical questions. Kafka and Druid work together to create such a pipeline.
In this talk, Eric Graham and Rachel Pedreschi will discuss these pipelines and cover the following topics:
- Network flow use cases and why this data is important.
- Reference architectures from production systems at a major international bank.
- Why Kafka, Druid, and other OSS tools for network flows.
- A demo of one such system.
CrateDB can help make working with sensor data at scale easier than ever. Join us as we take you from download through everything you need to know to put CrateDB to work with your sensor data.
What you'll learn:
How to set up your CrateDB instance
Database design - partitioning and sharding
How to insert, query and connect with CrateDB
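The partitioning and sharding topics above boil down to a couple of DDL clauses in practice. As a rough sketch, assuming a made-up sensor table (names are illustrative, not from the webinar), here is a helper that renders the kind of CREATE TABLE statement CrateDB accepts, with its CLUSTERED INTO ... SHARDS and PARTITIONED BY clauses:

```python
# Sketch: render a CrateDB-style CREATE TABLE with sharding and
# partitioning clauses. Table and column names are hypothetical;
# the clause syntax follows CrateDB's documented DDL.

def create_sensor_table_ddl(table, columns, partition_col, shards):
    """Build a CREATE TABLE statement with CLUSTERED/PARTITIONED clauses."""
    cols = ",\n  ".join(f"{name} {ctype}" for name, ctype in columns)
    return (
        f"CREATE TABLE {table} (\n  {cols}\n) "
        f"CLUSTERED INTO {shards} SHARDS "
        f"PARTITIONED BY ({partition_col})"
    )

ddl = create_sensor_table_ddl(
    table="sensor_readings",
    columns=[
        ("sensor_id", "TEXT"),
        ("ts", "TIMESTAMP WITH TIME ZONE"),
        ("value", "DOUBLE PRECISION"),
        # A generated column is a common way to partition by day.
        ("day", "TIMESTAMP WITH TIME ZONE GENERATED ALWAYS AS date_trunc('day', ts)"),
    ],
    partition_col="day",
    shards=6,
)
print(ddl)
```

Partitioning by a coarse, generated time column keeps each partition a manageable size, while the shard count spreads each partition across the cluster.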
Hasura 2.0 is our biggest release since we launched.
This webinar goes over the new capabilities in our v2 release:
- Connect to multiple databases
- Generate REST APIs
- Enhanced Authorisation
- Hasura Cloud & AWS VPC Peering
Linkurious Enterprise is compatible with Azure Cosmos DB and offers investigation teams a turnkey solution to detect and investigate threats hidden in graph data. In this post, we explain how Linkurious Enterprise connects to Cosmos DB graph database.
Yo. Big data. Understanding data science in the era of big data. - Natalino Busa
We talk a lot these days about data science, and how it will pave our paths with beautiful insights and unexpected new relations and connections in our given datasets, and even across datasets.
But how to maintain the "Science" part in "Data Science"? After some time working in this field I appreciate more and more the critical thinking which has characterized the progress in science.
Hypothesis, facts, prove and/or disprove the thesis: this is how science has progressed over the past centuries. This method was formalized by Popper, who categorized as non-science all disciplines whose statements cannot be falsified. In other words, if a statement cannot be disproved, we cannot speak of science, since there is no mechanism left to verify the solution or to prove it wrong.
When that happens the argument can still be accepted, but not scientifically accepted. Ways of accepting or refuting a non-falsifiable statement are based, for instance, on aesthetic, authority-based, pragmatic, or philosophical considerations. All valid, but not scientific. This applies, for instance, to statements in the disciplines of politics, theology, ethics, etc.
Science has definitely progressed since then. For instance, Bayesian networks and statistical induction are now part of the (data) scientist's arsenal. But no matter how the baseline is set, critical thinking and a rigorous method are definitely helpful in understanding the results produced by science, in particular when it is based on large amounts of data and computational in nature, rather than formula/model driven.
Data Science has currently many different connotations. On one side it praises the "artistry", the genius of laying out connections between disciplines and concepts. This is a truly great aspect of scientists and creativity is definitely very welcome in all data science profiles.
Alongside the fun of creating new insights and new data golden eggs, a data scientist has to put up with those annoying criteria of reproducibility, falsifiability, and peer review. Sometimes these elements are postponed or left behind in the name of artistry. Granted, it's hard to find metrics and baselines for comparing models and data science solutions. But the scientific method has proven solid over the centuries: it allows factual scientific discussion between scientists and allows selection between models based on objective, agreed criteria.
Hands-on experience in real-time data processing with AWS Kinesis, Firehose, S3 ... - Chuan-Yen Chiang
Hands-on experience in building a real-time data processing pipeline with AWS Kinesis, Firehose, S3, and Athena, and why we migrated our data analysis jobs from Google BigQuery to AWS Athena.
Your Pay Cheque has nothing to do with your retirement savings - Prof. Simply Simple
Small savings and investments help in accumulating your retirement corpus. Investing in equity mutual funds will help your money grow faster. Start investing early and stay invested in equity for a long time to gain the power of compounding.
Aristotle's Storytelling Framework for the Web - Jeroen van Geel
A step-by-step approach, with Aristotle’s view on Greek tragedy at its core, that helps designers create solid interactive projects that engage customers in the right way.
Viridian Red World Trade Center Quad - smartly crafted modern spaces that bring comfort and style to elevate your lifestyle
Viridian Red World Trade Center Quad is a proud presentation by Viridian RED. The project's contemporary design and detailed planning are proof of high-quality architecture. The suave residential project is located in Greater Noida.
A whirlwind tour of the modules that any perl hacker, from beginner to experienced, should use and why.
Handout: List of modules in the talk along with many more: https://sites.google.com/site/perlhercynium/TEPHT-List2.pdf?attredirects=0
Austin Journal of Clinical Immunology is an open access, peer-reviewed, scholarly journal dedicated to publishing articles in all areas of immunology, asthma and allergy. The aim of the journal is to develop a knowledge-sharing platform and an interactive network for immunologists, researchers, physicians, and other health professionals to exchange scientific information in the areas of immunology.
Austin Journal of Clinical Immunology accepts original research articles, review articles, case reports, clinical images and rapid communications on all aspects of immunology and immunotechnology.
Austin Journal of Clinical Immunology strongly supports scientific advancement in the related research community by enhancing access to peer-reviewed scientific literature. Austin Publishing Group also brings universally peer-reviewed journals under one roof, thereby promoting knowledge sharing and the mutual promotion of multidisciplinary science.
OSDC 2017 | An Open Machine Data Analysis Stack with Docker, CrateDB, and Gr... - NETWAYS
Predictive analytics, Internet of Things, Industry 4.0 - everybody has heard them at least once, but what do real installations look like? How can containerized Microservices help deployment and increase productivity? Claus from Crate.io will answer any and all of these questions and show real world examples with a stack based on Raspberry Pis, Grafana, Docker, and Rust.
The Hive Think Tank - The Microsoft Big Data Stack by Raghu Ramakrishnan, CTO... - The Hive
Until recently, data was gathered for well-defined objectives such as auditing, forensics, reporting and line-of-business operations; now, exploratory and predictive analysis is becoming ubiquitous, and the default increasingly is to capture and store any and all data, in anticipation of potential future strategic value. These differences in data heterogeneity, scale and usage are leading to a new generation of data management and analytic systems, where the emphasis is on supporting a wide range of very large datasets that are stored uniformly and analyzed seamlessly using whatever techniques are most appropriate, including traditional tools like SQL and BI and newer tools, e.g., for machine learning and stream analytics. These new systems are necessarily based on scale-out architectures for both storage and computation.
Hadoop has become a key building block in the new generation of scale-out systems. On the storage side, HDFS has provided a cost-effective and scalable substrate for storing large heterogeneous datasets. However, as key customer and systems touch points are instrumented to log data, and Internet of Things applications become common, data in the enterprise is growing at a staggering pace, and the need to leverage different storage tiers (ranging from tape to main memory) is posing new challenges, leading to caching technologies, such as Spark. On the analytics side, the emergence of resource managers such as YARN has opened the door for analytics tools to bypass the Map-Reduce layer and directly exploit shared system resources while computing close to data copies. This trend is especially significant for iterative computations such as graph analytics and machine learning, for which Map-Reduce is widely recognized to be a poor fit.
While Hadoop is widely recognized and used externally, Microsoft has long been at the forefront of Big Data analytics, with Cosmos and Scope supporting all internal customers. These internal services are a key part of our strategy going forward, and are enabling new state of the art external-facing services such as Azure Data Lake and more. I will examine these trends, and ground the talk by discussing the Microsoft Big Data stack.
Achieving Real-time Ingestion and Analysis of Security Events through Kafka a... - Kevin Mao
Strata Hadoop World 2017 San Jose
Today’s enterprise architectures are often composed of a myriad of heterogeneous devices. Bring-your-own-device policies, vendor diversification, and the transition to the cloud all contribute to a sprawling infrastructure, the complexity and scale of which can only be addressed by using modern distributed data processing systems.
Kevin Mao outlines the system that Capital One has built to collect, clean, and analyze the security-related events occurring within its digital infrastructure. Raw data from each component is collected and preprocessed using Apache NiFi flows. This raw data is then written into an Apache Kafka cluster, which serves as the primary communications backbone of the platform. The raw data is parsed, cleaned, and enriched in real time via Apache Metron and Apache Storm and ingested into Elasticsearch, allowing operations teams to detect and monitor events as they occur. The refined data is also transformed into the Apache ORC data format and stored in Amazon S3, allowing data scientists to perform long-term, batch-based analysis.
Kevin discusses the challenges involved with architecting and implementing this system, such as data quality, performance tuning, and the impact of additional financial regulations relating to data governance, and shares the results of these efforts and the value that the data platform brings to Capital One.
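The parse/clean/enrich stage described above is, at its core, a pure function from a raw event to a refined record, which is what makes it easy to test outside of Kafka or Storm. A minimal sketch of that idea (the event format and the asset-owner lookup are invented for illustration, not Capital One's schema):

```python
# Sketch of a parse/enrich step for raw security events, written as a
# pure function: raw text in, refined dict out. Field names and the
# asset lookup table below are hypothetical.

# Hypothetical asset inventory used to enrich events with ownership info.
ASSET_OWNERS = {
    "10.0.1.5": "payments-team",
    "10.0.2.9": "fraud-team",
}

def enrich_event(raw_line):
    """Parse 'timestamp|src_ip|action' and attach the owning team."""
    timestamp, src_ip, action = raw_line.strip().split("|")
    return {
        "timestamp": timestamp,
        "src_ip": src_ip,
        "action": action,
        "owner": ASSET_OWNERS.get(src_ip, "unknown"),
    }

event = enrich_event("2017-03-14T09:26:53Z|10.0.1.5|login_failure")
print(event["owner"])  # payments-team
```

In a real deployment, a Storm bolt or Metron parser would call a function like this per Kafka message; keeping it side-effect free makes data-quality checks and unit tests straightforward.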
Science as a Service: How On-Demand Computing can Accelerate DiscoveryIan Foster
My talk at ScienceCloud 2013 in NYC. Thanks to the organizers for the invitation to talk.
A bit of new material relative to previous talks posted, e.g., on Globus Genomics.
Big data solutions for advanced marketing analytics - Natalino Busa
Our retail banking market demands, now more than ever, that we stay close to our customers and carefully understand which services, products, and wishes are relevant for each customer at any given time. This sort of marketing research is often beyond the capacity of traditional BI reporting frameworks. In this talk, we illustrate how we team up data scientists and big data engineers in order to create and scale distributed analyses on a big data platform. By using Hadoop and open-source statistical languages and tools such as R and Python, we can execute a variety of machine learning algorithms and scale them out on a distributed computing framework.
Data Analytics Week at the San Francisco Loft
Using Data Lakes
A data lake can be used as a source for both structured and unstructured data - but how? We'll look at using open standards including Spark and Presto with Amazon EMR, Amazon Redshift Spectrum and Amazon Athena to process and understand data.
Speakers:
John Mallory - Principal Business Development Manager Storage (Object), AWS
Hemant Borole - Sr. Big Data Consultant, AWS
Introducing Open Distro for Elasticsearch - ADB201 - New York AWS Summit - Amazon Web Services
Open Distro for Elasticsearch is a 100% open-source distribution of Elasticsearch, the popular search and analytics engine. In this session, we explore its many new advanced features—previously available only in commercial software—including encryption in transit, role-based access control (RBAC), event monitoring and alerting, SQL support, cluster diagnostics, and more. We also show you how you can join the Open Distro for Elasticsearch community to accelerate open innovation for Elasticsearch.
CrateDB can help make working with sensor data at scale easier than ever. Join us as we take you from download through everything you need to know to put CrateDB to work with your sensor data.
- How to set up your CrateDB instance
- Database design – partitioning and sharding
- How to insert, query and connect with CrateDB
- How and when to scale your CrateDB cluster
A data lake can be used as a source for both structured and unstructured data - but how? We'll look at using open standards including Spark and Presto with Amazon EMR, Amazon Redshift Spectrum and Amazon Athena to process and understand data.
Level: Intermediate
Speakers:
Tony Nguyen - Senior Consultant, ProServe, AWS
Hannah Marlowe - Consultant - Federal, AWS
Introduces the Globus software-as-a-service for file transfer and data sharing. Includes step-by-step instructions for creating a Globus account, transferring a file, and setting up a Globus endpoint on your laptop.
Modeling data and best practices for Azure Cosmos DB - Mohammad Asif
Azure Cosmos DB is Microsoft's globally distributed, multi-model database service. In this session we covered modeling data with the NoSQL Cosmos database and how it helps distributed applications maintain high availability, scale across multiple regions, and manage throughput.
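One modeling question a session like this typically covers is when to embed related data in a single document versus referencing it across documents. A rough sketch of the trade-off using plain Python dicts (the customer/order shapes are hypothetical, not from the session):

```python
# Sketch: embedding vs. referencing in a document model such as
# Cosmos DB's NoSQL API. The document shapes below are hypothetical.

# Embedded: one read fetches the customer together with their addresses.
# Good when the embedded data is bounded and usually read together.
customer_embedded = {
    "id": "c-42",
    "name": "Ada",
    "addresses": [
        {"kind": "home", "city": "Graz"},
        {"kind": "work", "city": "Vienna"},
    ],
}

# Referenced: orders grow without bound, so each lives in its own
# document keyed back to the customer by id.
orders = [
    {"id": "o-1", "customerId": "c-42", "total": 19.99},
    {"id": "o-2", "customerId": "c-42", "total": 5.00},
]

def orders_for(customer_id, order_docs):
    """Resolve the reference side: all orders for one customer."""
    return [o for o in order_docs if o["customerId"] == customer_id]

print(len(orders_for("c-42", orders)))  # 2
```

Embedding keeps reads to a single document at the cost of larger writes; referencing keeps documents small and unbounded collections separate, at the cost of extra queries.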
NoSQL Strikes Back (An introduction to the dark side of your data)
A long time ago in a database far, far away...
SQL was the only option to save vast amounts of application data for a long period of time. There were always some rebellion activities, to overcome the SQL Empire, which brought a new hope, but all other ways of storing data were never more than a phantom menace.
Now Cosmos DB awakens and is ready for the revenge of the NoSQL.
During this talk, we will have a look at what Azure Cosmos DB is, what you can achieve with its possibilities and how to use it in a galactic environment of data and applications.
Join me and find your way to the right solution for your application.
May the data be with you!
Data warehousing is a critical component for analysing and extracting actionable insights from your data. Amazon Redshift allows you to deploy a scalable data warehouse in a matter of minutes and start to analyse your data right away using your existing business intelligence tools. It’s a fast, fully managed, and cost-effective data warehousing system. You can analyse all your data using standard SQL and your existing Business Intelligence (BI) tools. Amazon Redshift also includes Redshift Spectrum, allowing you to run SQL queries directly against exabytes of unstructured data in Amazon S3. In this session, you will learn how to migrate from existing data warehouses, optimise schemas and load data efficiently. We will also cover analytics tools to help you build visualisations, perform ad-hoc analysis and quickly get business insights from your data.
Learning Objectives:
• Discover best practices for building a data warehouse using Amazon Redshift
• Learn to use Amazon QuickSight for Business Intelligence and AWS Glue for ETL.
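One concrete piece of "load data efficiently" is Redshift's COPY command, which loads files from Amazon S3 in parallel across the cluster. A small helper that renders such a statement (the table, bucket, and IAM role below are placeholders, not real resources):

```python
# Sketch: render a Redshift COPY statement for parallel loads from S3.
# The table, bucket, and IAM role values are placeholders.

def render_copy(table, s3_prefix, iam_role, fmt="PARQUET"):
    """Build a COPY statement loading all files under an S3 prefix."""
    return (
        f"COPY {table} "
        f"FROM '{s3_prefix}' "
        f"IAM_ROLE '{iam_role}' "
        f"FORMAT AS {fmt}"
    )

stmt = render_copy(
    table="clickstream",
    s3_prefix="s3://example-bucket/clickstream/2024/",
    iam_role="arn:aws:iam::123456789012:role/redshift-load",
)
print(stmt)
```

Pointing COPY at a prefix of many similarly sized files (rather than one large file) lets each slice of the cluster ingest in parallel, which is the main lever for load efficiency.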
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
1. Containerized DBs in a Machine Data Environment
(or how you get the most out of your containerized database)
DevOps Gathering, 24th March 2017
@claus__m
2. About
~2 yrs at Crate.io: DevRel / Field Engineering / Support / Integrations / …
Crate.io: founded in 2013, ~25 people and growing
Offices: San Francisco, Berlin, Dornbirn (AT)
Talk to me about Rust, Raspberry Pis, tech!
4. Source: HPE Jun 2016
http://www.slideshare.net/penumuru/harness-the-power-of-big-data-with-oracle-63438438/9
5. Machine Data Characteristics
Millions of data points per second, streaming in from sensors, devices, logs, etc.
Data diversity: structured & unstructured JSON, blobs
Real-time query performance: monitoring & alerting
Complex queries over big data volumes, with terabytes of historic data
Growth: adding sources often means exponential growth
6. Machine Data
Internet of Things: sensors, cameras, ...
Wearables, gadgets: location data, interaction data, ...
Logs & monitoring data: component health monitoring, access logs, ...
Industry 4.0, digitization: production line insights, automation, ...
Vehicles: location data, health data, ...
7. Clickdrive.io
Fleet management & vehicle tracking: vehicle health and tracking data
High ingest rate: 2,000 data points per car, per second
In-depth & real-time analysis: predictive maintenance, accident reconstruction, route/driver efficiency
8. Roomonitor
Smart apartments: monitoring & control of climate, occupancy, noise, access
Better efficiency, safer environment
Alerts: AC/heating on with a window open, noisy neighbors, ...
9. Skyhigh Networks
Cloud access security broker (CASB): access logging for cloud services
Large data volumes & ingest: billions of events per day from 600+ customers, tens of thousands of concurrent TCP connections
Machine data is the fingerprint of fraud: unsupervised learning to find anomalies
15. Go Live
More users! More sensors and users
Data storage: slow and fast
Monitoring & analytics: two different subsystems
[Architecture diagram: load balancer fronting multiple service instances; a message queue feeding a NoSQL DB and a SQL DB; separate monitoring and analytics subsystems]
16. But ...
Even more users? Horizontal scaling?
Maintenance & bug hunting? Mostly via scheduled downtimes
Reporting? Kafka? Elasticsearch?
Security? Access control?
Expertise? Knowledge transfer?
[Architecture diagram: the same stack as before (load balancer, services, message queue, NoSQL DB, SQL DB, monitoring, analytics)]
22. CrateDB Fundamentals
Disk-based index with in-memory caching: fast and efficient OS caching
Shards: units of data; concurrency by distributing shards
Distributed query execution engine: "push down" queries
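Sharding is what enables the concurrency above: a table is split into shards at creation time, and CrateDB distributes those shards (and the queries over them) across the cluster. As a minimal sketch, a sharded table could be defined like this (table name, columns, and type spellings are illustrative and vary by CrateDB version):

```sql
-- Hypothetical sensor table, split into 6 shards distributed
-- across the cluster, with one replica of each shard for HA.
CREATE TABLE sensor_data (
    device_id   TEXT,
    ts          TIMESTAMP,
    temperature DOUBLE
) CLUSTERED BY (device_id) INTO 6 SHARDS
WITH (number_of_replicas = 1);
```

Rows are routed to shards by hashing the `CLUSTERED BY` column, so readings for one device stay together while write and query load spreads across nodes.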
24. A better setup!
Horizontal scalability: scale out everything
Reduced tech stack: get to know it quicker
Live reporting: use ad-hoc queries on production data
Flexibility: schema evolution not required
[Architecture diagram: simplified stack with load balancer, services, monitoring, and analytics]
25. A better setup!
No single point of failure: as highly available as your service
Reduced network traffic: better reliability
No queue: work with real data
DB isolation: accessible only from the host
[Architecture diagram: the same simplified stack (load balancer, services, monitoring, analytics)]
26. Live Demo
Docker Swarm: orchestration across platforms
Eden Server (Rust!): RESTful web service
Eden Client (Rust!): ARM application for reading temperature data from a BMP180
Grafana: to draw up a nice dashboard
[Demo diagram: load balancer fronting Grafana (G) and Eden server instances (E), with a Raspberry Pi (Pi) running the client]
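A demo stack like this can be described declaratively and deployed onto a Swarm. The compose file below is only a sketch of what such a deployment might look like; the `eden-server` image name and the port mappings are assumptions, not taken from the actual demo:

```yaml
# Hypothetical stack file for the demo (image names are illustrative).
version: "3"
services:
  cratedb:
    image: crate
    ports:
      - "4200:4200"        # CrateDB HTTP endpoint and admin UI
  eden-server:
    image: eden-server     # assumed image for the Rust REST service
    deploy:
      replicas: 2          # stateless, so it can scale horizontally
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"        # Grafana dashboards
```

Deployed with `docker stack deploy -c stack.yml demo`, the Raspberry Pi client would then POST its BMP180 readings to the Eden server, which writes them to CrateDB.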
28. An Open Stack for Machine Data w/ CrateDB
Ad-hoc analysis with SQL: instant reporting on production data
Integrates well: legacy SQL applications included
Horizontally scalable: container native, highly available
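To make "ad-hoc analysis with SQL" concrete: a report can run directly against live production data, with no export step and no queue in between. A sketch of such a query follows (table and column names are hypothetical, and the arithmetic assumes CrateDB's epoch-millisecond timestamps):

```sql
-- Hypothetical ad-hoc report: per-device average temperature
-- over the last hour, straight from the production table.
SELECT device_id,
       avg(temperature) AS avg_temp,
       count(*)         AS readings
FROM sensor_data
WHERE ts > CURRENT_TIMESTAMP - 3600000   -- last hour, in ms
GROUP BY device_id
ORDER BY avg_temp DESC;
```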