Learn how NamPower used FME to integrate spatial data from multiple sources into a SCADA Dashboard to enable engineers to make decisions under emergency conditions and to deliver critical data to staff while they are not in the office.
Accelerating Apache Spark Shuffle for Data Analytics on the Cloud with Remote... (Databricks)
The increasing challenge of serving ever-growing data volumes driven by AI and analytics workloads makes disaggregated storage and compute more attractive, as it enables companies to scale their storage and compute capacity independently to match data and compute growth rates. Cloud-based big data services are gaining momentum because they provide simplified management, elasticity, and a pay-as-you-go model.
Scaling Machine Learning with Apache Spark (Databricks)
Spark has become synonymous with big data processing; however, the majority of data scientists still build models using single-machine libraries. This talk will explore the multitude of ways Spark can be used to scale machine learning applications. In particular, we will guide you through distributed solutions for training and inference, distributed hyperparameter search, deployment issues, and new features for machine learning in Apache Spark 3.0. Niall Turbitt and Holly Smith combine their years of experience working with Spark to summarize best practices for scaling ML solutions.
Apache Spark Tutorial | Spark Tutorial for Beginners | Apache Spark Training ... (Edureka!)
This Edureka Spark tutorial will help you understand all the basics of Apache Spark. It is ideal both for beginners and for professionals who want to learn or brush up on Apache Spark concepts. Below are the topics covered in this tutorial:
1) Big Data Introduction
2) Batch vs Real Time Analytics
3) Why Apache Spark?
4) What is Apache Spark?
5) Using Spark with Hadoop
6) Apache Spark Features
7) Apache Spark Ecosystem
8) Demo: Earthquake Detection Using Apache Spark
Improving Apache Spark's Reliability with DataSourceV2 (Databricks)
DataSourceV2 is Spark's new API for working with data from tables and streams, but "v2" also includes a set of changes to SQL internals, the addition of a catalog API, and changes to the data frame read and write APIs. This talk will cover the context for those additional changes and how "v2" will make Spark more reliable and predictable for building enterprise data pipelines. This talk will include: * Problem areas where the current behavior is unpredictable or unreliable * The new standard SQL write plans (and the related SPIP) * The new table catalog API and a new Scala API for table DDL operations (and the related SPIP) * Netflix's use case that motivated these changes
I have studied Big Data analysis and found Hadoop to be the most popular and, thanks to its distributed data processing approach, one of the most effective technologies for it. In this slide show I have gathered information about the various Hadoop distributions available on the market and described the most important tools in the Hadoop ecosystem and their functionality. I also discuss connectivity with the R language from a data analysis and visualization perspective. I hope you enjoy it!
Spark Autotuning: Spark Summit East talk by Lawrence Spracklen (Spark Summit)
While the performance delivered by Spark has enabled data scientists to undertake sophisticated analyses of big and complex data in actionable timeframes, too often the process of manually configuring the underlying Spark jobs (including the number and size of the executors) is a significant and time-consuming undertaking. Not only does this configuration process typically rely heavily on repeated trial and error, it also requires that data scientists have a low-level understanding of Spark and detailed cluster sizing information. At Alpine Data we have been working to eliminate this requirement and to develop algorithms that can automatically tune Spark jobs with minimal user involvement.
In this presentation, we discuss the algorithms we have developed and illustrate how they leverage information about the size of the data being analyzed, the analytical operations being used in the flow, the cluster size, configuration and real-time utilization, to automatically determine the optimal Spark job configuration for peak performance.
RDF Linked Data - Automatic Exchange of BIM Containers (Safe Software)
This presentation tells the story of a Dutch utility company, and the FME solutions it built, for the automatic exchange of data containers holding RDF Linked Data, BIM models, and documents.
The presentation will focus on the non-traditional representation of RDF Linked Data and how this integrates with FME through SPARQL, Apache Jena, and a few customer-built transformers in FME.
This FME solution also uses my Excel switch-based method of directing the data flow (my presentation during the FME World Fair).
Presented at Strata San Jose 2018. Shares how Netflix enables business teams to perform cohort analysis on very large, high dimensional data by using Big Data and web application technologies such as Spark, Druid, Node, React, and D3
Personalized Job Recommendation System at LinkedIn: Practical Challenges and ... (Benjamin Le)
An industry talk given at RecSys 2017 about job recommendations at LinkedIn and some of the challenges we faced and solved. https://recsys.acm.org/recsys17/industry-session-2/#content-tab-1-4-tab
Cost-Based Optimizer in Apache Spark 2.2 (Databricks)
Apache Spark 2.2 ships with a state-of-the-art cost-based optimization framework that collects and leverages a variety of per-column data statistics (e.g., cardinality, number of distinct values, NULL values, max/min, avg/max length, etc.) to improve the quality of query execution plans. Leveraging these reliable statistics helps Spark make better decisions when picking the optimal query plan. Examples of these optimizations include selecting the correct build side in a hash-join, choosing the right join type (broadcast hash-join vs. shuffled hash-join), and adjusting a multi-way join order, among others. In this talk, we'll take a deep dive into Spark's cost-based optimizer and discuss how we collect and store these statistics, the query optimizations it enables, and its performance impact on TPC-DS benchmark queries.
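As a rough sketch of how this optimizer is driven in practice, one might enable it and collect the statistics it relies on using Spark SQL like this (the table and column names here are hypothetical; the optimizer is off by default in Spark 2.2):

```sql
-- enable the cost-based optimizer and cost-based join reordering
SET spark.sql.cbo.enabled=true;
SET spark.sql.cbo.joinReorder.enabled=true;

-- collect table-level, then per-column, statistics for the planner
ANALYZE TABLE sales COMPUTE STATISTICS;
ANALYZE TABLE sales COMPUTE STATISTICS FOR COLUMNS customer_id, amount;
```

Without the `ANALYZE TABLE` step the optimizer has no statistics to work with and falls back to its default heuristics.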
Deep Learning for Recommender Systems with Nick Pentreath (Databricks)
In the last few years, deep learning has achieved significant success in a wide range of domains, including computer vision, artificial intelligence, speech, NLP, and reinforcement learning. However, deep learning in recommender systems has, until recently, received relatively little attention. This talk explores recent advances in this area in both research and practice. I will explain how deep learning can be applied to recommendation settings, describe architectures for handling contextual data, side information, and time-based models, compare deep learning approaches to other cutting-edge contextual recommendation models, and finally explore scalability issues and model serving challenges.
Improving Apache Spark by Taking Advantage of Disaggregated Architecture (Databricks)
Shuffle in Apache Spark is an intermediate phase that redistributes data across computing units, and it relies on one important primitive: shuffle data is persisted on local disks. This architecture suffers from some scalability and reliability issues. Moreover, the assumption of collocated storage does not always hold in today's data centers. The hardware trend is moving to a disaggregated storage and compute architecture for better cost efficiency and scalability.
To address the issues of Spark shuffle and support disaggregated storage and compute architecture, we implemented a new remote Spark shuffle manager. This new architecture writes shuffle data to a remote cluster with different Hadoop-compatible filesystem backends.
Firstly, the failure of compute nodes will no longer cause shuffle data recomputation. Spark executors can also be allocated and recycled dynamically, which results in better resource utilization.
Secondly, for most customers currently running Spark with collocated storage, it is usually challenging to upgrade the disks on every node to the latest hardware, such as NVMe SSDs and persistent memory, because of cost considerations and system compatibility. With this new shuffle manager, they are free to build a separate cluster for storing and serving the shuffle data, leveraging the latest hardware to improve performance and reliability.
Thirdly, in the HPC world, more customers are trying Spark as their high-performance data analytics tool, while storage and compute in HPC clusters are typically disaggregated. This work will make their lives easier.
In this talk, we will present an overview of the issues with the current Spark shuffle implementation, the design of the new remote shuffle manager, and a performance study of the work.
Real Time Analytics for Big Data: a Twitter Case Study (Nati Shalom)
Hadoop's batch-oriented processing is sufficient for many use cases, especially where the frequency of data reporting doesn't need to be up-to-the-minute. However, batch processing isn't always adequate, particularly when serving online needs such as mobile and web clients, or markets with real-time changing conditions such as finance and advertising.
In the same way that Hadoop was born out of large-scale web applications, a new class of scalable frameworks and platforms for real-time stream processing and real-time analysis has emerged to handle the needs of large-scale, location-aware mobile, social, and sensor applications.
Facebook, Twitter, and Google have been pioneers in that arena and have recently launched new analytics services designed to meet these real-time needs.
In this session we will review the common patterns and architectures that drive these platforms and learn how to build a Twitter-like analytics system in a simple way using frameworks such as Spring Social, an active in-memory data grid for big data event processing, and a NoSQL database such as Cassandra or HBase for managing the historical data.
Participants in this session will also receive a hands-on tutorial for trying out these patterns in their own environment.
A detailed post covering the topic including a reference to a code example illustrating the reference architecture is available below:
http://horovits.wordpress.com/2012/01/27/analytics-for-big-data-venturing-with-the-twitter-use-case/
Using Grafana with InfluxDB 2.0 and Flux Lang by Jacob Lisi (InfluxData)
Flux, the new InfluxData data scripting and query language (formerly IFQL), super-charges queries both for analytics and data science. Jacob Lisi from Grafana Labs will give a quick overview of the language features as well as the moving parts for a working deployment. Grafana is an open source dashboard solution that shares Flux’s passion for analytics and data science. For that reason, they are very excited to showcase the new Flux support within Grafana, and a couple of common analytics use cases to get the most out of your data.
In this InfluxDays NYC 2019 talk, Jacob Lisi will share the latest updates they have made with their Flux builder in Grafana.
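To give a flavor of the pipe-forward style the talk covers, a minimal Flux query might look like the following (the bucket, measurement, and field names here are hypothetical):

```
// fetch the last hour of CPU usage and downsample to 1-minute means
from(bucket: "telegraf")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_user")
  |> aggregateWindow(every: 1m, fn: mean)
```

Each `|>` pipes the output table stream of one function into the next, which is what makes the language composable for both dashboards and ad hoc analytics.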
Deeper Things: How Netflix Leverages Deep Learning in Recommendations and Se... (Sudeep Das, Ph.D.)
In this talk, we will provide an overview of Deep Learning methods applied to personalization and search at Netflix. We will set the stage by describing the unique challenges faced at Netflix in the areas of recommendations and information retrieval. Then we will delve into how we leverage a blend of traditional algorithms and emergent deep learning methods and new types of embeddings, especially hyperbolic space embeddings, to address these challenges.
Tech Talk: Five Simple Steps to a More Powerful Database Experience (CA Technologies)
This interactive discussion is for customers who have recently upgraded, are currently upgrading, or are considering upgrading to the most current production release. As technology evolves, the way people work continues to change. To make an impact, you need to think creatively and innovate boldly. Join us to hear about five simple steps toward a more powerful CA IDMS™ and CA Datacom® mainframe database experience, steps that help to improve performance, scalability, platform support, standards compliance, and usability.
For more information, please visit http://cainc.to/Nv2VOe
Case Study: Datotel Extended the Power of Infrastructure Management to the Ph... (CA Technologies)
Learn how Datotel, a provider of cloud computing, co-location, and Infrastructure as a Service (IaaS), is using CA DCIM to help run its data centers more efficiently.
For more information on DevOps solutions from CA Technologies, please visit: http://bit.ly/1wbjjqX
How to Store and Visualize CAN Bus Telematic Data with InfluxDB Cloud and Gra... (InfluxData)
CSS Electronics develops and manufactures professional-grade, simple-to-use CAN bus data loggers. Their plug-and-play CANedge2 logger records time-stamped raw CAN data to an extractable industrial SD card, and connects via WiFi/3G/4G access points to upload the data to the end user's own servers. The CANedge2 is ideal for collecting automotive sensor metrics like speed, temperatures, state of charge, GPS and more. Learn how to create your own telematics dashboard built on InfluxDB in minutes by attending this webinar!
Join us as Martin Falch dives into:
- CSS Electronics’ approach to improving R&D field testing, diagnostics, fleet management and predictive maintenance
- The CANedge’s methodology for collecting IIoT data from cars, trucks and machines
- How they process time series data from S3 via Python to store it in InfluxDB and visualize it with Grafana
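The Python-to-InfluxDB step above can be sketched in plain Python by formatting decoded records as InfluxDB line protocol before writing them. The measurement, tag, and signal names below are hypothetical; a real pipeline would hand these lines to an InfluxDB client library rather than build them manually, and this simplified formatter handles float fields only (string and integer fields need extra quoting/suffix rules):

```python
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Format one record as an InfluxDB line-protocol string:
    measurement,tag1=v1,tag2=v2 field1=v1 timestamp
    Simplified: assumes float field values and escape-free names."""
    tag_part = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_part = ",".join(f"{k}={v}" for k, v in fields.items())
    return f"{measurement},{tag_part} {field_part} {timestamp_ns}"

# e.g. a decoded CAN bus wheel-speed signal from one vehicle
line = to_line_protocol(
    "can_signal",                                      # measurement
    {"device": "truck_01", "signal": "WheelSpeed"},    # tags (indexed)
    {"value": 72.5},                                   # fields (data)
    1609459200000000000,                               # ns timestamp
)
```

Batching many such lines into a single HTTP write is the usual way to keep ingest throughput high.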
FME: a Key Component of the Spatial DNA Platform (Safe Software)
Spatial DNA is an integration platform built to connect any combination of systems to streamline workflows and provide access to accurate, timely, and consistent data for smarter decisions. We built this platform specifically to support customers and partners in application integration, automating business processes with pre-built Smart Workflows. We have 20 Smart Workflows (and counting), automating processes such as material management, payment reconciliation, and asset design-build-operate handovers. Most integration technology, FME included, is a toolbox full of components for building data and application integration workflows. Many customers are looking for an off-the-shelf capability that is quick and easy to deploy, is vendor-supported, simplifies their implementations, and makes integration costs predictable.
Todd will discuss how we leverage the FME Platform under the covers to build out our integration capabilities. We will describe the "special sauce" we've added to alleviate customer frustration and enable plug-and-play integrations. In this talk, you'll learn the following:
- Customer use cases and examples
- Spatial DNA Platform and Smart Workflow architecture
- Key FME Platform components and patterns that we leverage
Using Mainframe Data in the Cloud: Design Once, Deploy Anywhere in a Hybrid W... (Precisely)
Your company is storing and processing more data in the cloud – and mainframe data is no exception. Whether you’re centralizing enterprise data for analytics, streaming it to real-time cloud-native applications, or archiving for regulatory compliance, you know your mainframe data has to be included. Unfortunately, as with most mainframe initiatives, this is easier said than done!
View this webcast on-demand to learn some practical strategies for leveraging mainframe data in the cloud. We will cover:
• Common use cases for mainframe data in the cloud
• Challenges in using mainframe data in the cloud – and how to solve them
• How to get started
Improve Operational Efficiency in AEC with Data Integration (Safe Software)
With the fast-rising adoption of digital technologies, architecture, engineering, and construction (AEC) companies have started to leverage data to achieve operational ease by integrating data across systems, streamlining project workflows, and making data accessible.
Join us as we walk through the AEC project lifecycle and explore how FME can help you increase operational efficiency at each stage of the project. Through customer stories and live demos, we will explore tips for overcoming common data challenges using FME including:
- Creating data transformation workflows, such as CAD to GIS, point clouds to 3D models, and integrating data for facilities management.
- Automating data integration workflows without any coding.
We will run a Q&A session at the end to answer any questions you might have. Make sure to tune in!
Many organizations focus on the licensing cost of Hadoop when considering migrating to a cloud platform. But other costs should be considered as well, along with the biggest impact: the benefit of having a modern analytics platform that can handle all of your use cases. This session will cover lessons learned in assisting hundreds of companies to migrate from Hadoop to Databricks.
From Outdoor to Indoor: 3D and Venue Mapping – FME Summer Camp (Safe Software)
Indoor mapping is an exciting new opportunity for business, but it does not come without challenges. With complex data conversion, the merging of spatial and tabular data, and the added difficulty of changing venues, it can turn any venue into a Mess Hall. By adding new Readers and Writers to FME 2018, including IMDF, we're making it easy for you to validate, update, automate, and analyze indoor mapping data.
Moments after you move data into your Hadoop cluster or target database, new transactions on source systems make that data incomplete, and analyses done on that data inaccurate.
However, there are several strategies for keeping data in sync between data platforms.
View this webcast on-demand to learn about the advantages and disadvantages of various change data capture strategies, as well as:
• The latest improvements in Syncsort DMX and DMX-h
• The new DMX Change Data Capture software
• How Syncsort can help you keep your data analytics current and accurate
It is no longer efficient, nor even possible, to properly manage your infrastructure with manual processes performed in an ad hoc, incident-based manner. You must be able to continuously monitor, assess, adjust and restructure every part of your multiplatform, distributed, interconnected and internet-dependent cyber-multiverse to respond to constantly changing business requirements.
Elevate Capacity Management (formerly Athene) provides leading companies with the cross-platform capacity management solution they need to meet their capacity management challenges. The new release of Elevate Capacity Management adds new features to ensure data integrity, improve data filtering, and provide more flexibility in customizing the most important thresholds in your IT environment.
View this webinar on-demand and learn about these new features including:
• Performance enhancement for large scale data ingestion and reporting
• The ability to use virtually any metric as a threshold for monitoring and alerting
• A faster and more scalable multi-threaded data management architecture
Learn more about SCADA Expert ClearSCADA:
- Simplicity & Enhanced User Experience for faster deployment and improved time-to-market
- Reduced Maintenance Efforts for protection of investment
- Enhanced Security capability for better protection of the system
- Enhanced Operational Intelligence to help optimize operations and maintenance activities
- Integrated with the complete Schneider Electric Telemetry portfolio
Essentials of Automations: The Art of Triggers and Actions in FME (Safe Software)
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
The Zero-ETL Approach: Enhancing Data Agility and Insight (Safe Software)
In the ever-evolving landscape of data management, Zero-ETL is an approach that is reshaping how businesses handle and integrate their data. This webinar explores Zero-ETL, a paradigm shift from the traditional Extract, Transform, Load (ETL) process, offering a more streamlined, efficient, and real-time data integration method.
We will begin with an introduction to the concept of Zero-ETL, including how it allows direct access to data in its native environment and real-time data transformation, providing up-to-date information with significantly reduced data redundancy.
Next, we'll take you through several demonstrations showing how Zero-ETL can deliver real-time data and enable the free movement of data between systems. We will also discuss the various tools that support all aspects of Zero-ETL, providing attendees with an understanding of how they can adopt this innovative approach in their organizations.
Lastly, the session will conclude with an interactive Q&A segment, allowing participants to gain deeper insights into how Zero-ETL can be tailored to their specific business needs and how they can get started today.
Join us to discover how Zero-ETL can elevate your organization's data strategy.
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME (Safe Software)
Following the popularity of “Cloud Revolution: Exploring the New Wave of Serverless Spatial Data,” we’re thrilled to announce this much-anticipated encore webinar.
In this sequel, we’ll dive deeper into the Cloud-Native realm by uncovering practical applications and FME support for these new formats, including COGs, COPC, FlatGeoBuf, GeoParquet, STAC, and ZARR.
Building on the foundation laid by industry leaders Michelle Roby of Radiant Earth and Chris Holmes of Planet in the first webinar, this second part offers an in-depth look at the real-world application and behind-the-scenes dynamics of these cutting-edge formats. We will spotlight specific use-cases and workflows, showcasing their efficiency and relevance in practical scenarios.
Discover the vast possibilities each format holds, highlighted through detailed discussions and demonstrations. Our expert speakers will dissect the key aspects and provide critical takeaways for effective use, ensuring attendees leave with a thorough understanding of how to apply these formats in their own projects.
Elevate your understanding of how FME supports these cutting-edge technologies, enhancing your ability to manage, share, and analyze spatial data. Whether you’re building on knowledge from our initial session or are new to the serverless spatial data landscape, this webinar is your gateway to mastering cloud-native formats in your workflows.
From Event to Action: Accelerate Your Decision Making with Real-Time Automation (Safe Software)
Imagine a world where information flows as swiftly as thought itself, making decision-making as fluid as the data driving it. Every moment is critical, and the right tools can significantly boost your organization’s performance. The power of real-time data automation through FME can turn this vision into reality.
Aimed at professionals eager to leverage real-time data for enhanced decision-making and efficiency, this webinar will cover the essentials of real-time data and its significance. We’ll explore:
- FME’s role in real-time event processing, from data intake and analysis to transformation and reporting
- An overview of leveraging streams vs. automations
- FME’s impact across various industries, highlighted by real-life case studies
- Live demonstrations on setting up FME workflows for real-time data
- Practical advice on getting started, best practices, and tips for effective implementation
Join us to enhance your skills in real-time data automation with FME, and take your operational capabilities to the next level.
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation (Safe Software)
Hiring and retaining software development talent is next to impossible for AEC firms, as it is for many other industries.
Join us and guest speakers from HOK, a leader in the AEC industry, as they share their success in navigating the tight talent market through the use of no-code solutions and FME.
Discover how HOK approached the process of building a custom tool to automate the creation of projects and user management for Trimble Connect and ProjectSight.
Using a mix of traditional development and no-code approaches in FME, our guest speakers will reveal how the team bridged the resource gap and used the available talent pool, producing the mission-critical web app “Trajectory”.
They will also dive into details, illustrating first-hand how JSON data was used as a “glue” between two development groups.
Learn how embracing FME as a no-code solution can unlock potential within your teams, foster collaboration, and drive efficiency.
Powering Real-Time Decisions with Continuous Data Streams (Safe Software)
In an era where making swift, data-driven decisions can set industry leaders apart, understanding the world of data streaming and stream processing is crucial. During this webinar, we'll explore:
Stream Processing Overview: Dive into what stream processing entails and the value it brings to organizations.
Stream vs. Batch Processing: Learn the key differences and benefits of stream processing compared to traditional batch processing, highlighting the efficiency of real-time data handling.
Mastering Data Volumes: Discover strategies for effectively managing both high and low volume data streams, ensuring optimal performance.
Boosting Operational Excellence: Explore how adopting data streaming can enhance your organization's operational workflows and productivity.
Spatial Data's Role in Streams: Understand the importance of spatial data in stream processing for more informed decision-making.
Interactive Demos: Watch practical demos, from dynamic geofencing to group-based processing.
Plus, we’ll show you how you can do it without coding! Register now to take the first step towards more informed, timely, and precise decision-making for your organization.
The Critical Role of Spatial Data in Today's Data Ecosystem (Safe Software)
In today's data-driven landscape, integrating spatial data is becoming increasingly crucial for organizations aiming to harness the full potential of their data. Spatial data offers unique insights based on location, making it a fundamental component for addressing various challenges across different sectors, including urban planning, environmental sustainability, public health, and logistics.
Our webinar delves into the indispensable role of spatial data in data management and analysis. We'll showcase how omitting spatial data from your data strategy not only weakens your data infrastructure, but also limits the depth of your insights. Through real-world case studies, we'll highlight the transformative impact of spatial data, demonstrating its ability to uncover complex patterns, trends, and relationships.
Join us for this introductory-level webinar as we explore the critical importance of spatial data integration in driving strategic decision-making processes. By the end of the webinar, you'll gain a renewed perspective on how spatial data is essential for confronting and overcoming challenges across various domains.
Cloud Revolution: Exploring the New Wave of Serverless Spatial Data (Safe Software)
Once in a while, there really is something new under the sun. The rise of cloud-hosted data has fueled innovation in spatial data storage, enabling a brand new serverless architectural approach to spatial data sharing. Join us in our upcoming webinar to learn all about these new ways to organize your data, and leverage data shared by others. Explore the potential of Cloud Native Geospatial Formats in your workflows with FME, as we introduce six new formats: COGs, COPC, FlatGeoBuf, GeoParquet, STAC, and ZARR.
Learn from industry experts Michelle Roby from Radiant Earth and Chris Holmes from Planet about these cloud-native geospatial data formats and how they can make data easier to manage, share, and analyze. To get us started, they’ll explain the goals of the Cloud-Native Geospatial Foundation and provide overviews of cloud-native technologies including the Cloud-Optimized GeoTIFF (COG), SpatioTemporal Asset Catalogs (STAC), and GeoParquet.
Following this, our seasoned FME team will guide you through practical demonstrations, showcasing how to leverage each format to its fullest potential. Learn strategic approaches for seamless integration and transition, along with valuable tips to enhance performance using these formats in FME.
Discover how these formats are reshaping geospatial data handling and how you can seamlessly integrate them into your FME workflows and harness the explosion of cloud-hosted data.
Igniting Next Level Productivity with AI-Infused Data Integration Workflows (Safe Software)
Learn where FME meets AI in this upcoming webinar, and discover the time savings on offer. This webinar is tailored to ignite imaginations and offer solutions to your data integration challenges. As the new digital era sets sail on the winds of AI, integrating AI into our daily work is becoming a practical reality.
Segment 1, titled “AI: The Good, the Bad and the FME” by Darren Fergus of Locus, navigates through the realms of AI, scrutinizing its pervasive impact while underscoring the symbiotic potential of FME and AI. Join in an engaging demonstration as FME and ChatGPT collaboratively orchestrate a PowerPoint narrative, epitomizing the alliance of AI with human ingenuity.
In Segment 2, “Integrating GeoAI Models in FME” by Dennis Wilhelm and Dr. Christopher Britsch of con terra GmbH, the spotlight veers towards operationalizing AI in our daily tasks through FME. A practical approach to embedding GeoAI Models into FME Workspaces is unveiled, showcasing the ease of incorporating AI-driven methodologies into your FME workflows, skyrocketing productivity levels.
To follow, Segment 3, "Unleash generative AI on your terms!" by Oliver Morris of Avineon-Tensing. While the prospects of Generative AI are thrilling, security and IT reservations, especially with 'phone home' tools, are genuine concerns. However, with open-source tools, you can locally harness large language models. In this demo, we'll unravel the magic of local AI deployment and its seamless integration into an FME workspace.
Bonus! Dmitri will join us for a fourth segment to round things out, showcasing what he has been up to this week, including using the OpenAI API for texturing in FME, among other projects.
Join us to explore the synergy of FME and AI: opening portals to a realm of revolutionized productivity and enriched user experiences.
Mastering MicroStation DGN: How to Integrate CAD and GIS (Safe Software)
Dive deep into the world of CAD-GIS integration with our expert-led webinar. Discover how to seamlessly transfer data between Bentley MicroStation and leading GIS platforms, such as Esri ArcGIS. This session goes beyond mere CAD/GIS conversion, showcasing techniques to precisely transform MicroStation elements including cells, text, lines, and symbology. We’ll walk you through tags versus item types, helping you understand how to leverage both. You’ll also learn how to reproject to any coordinate system. Finally, explore cutting-edge automated methods for managing database links, and delve into innovative strategies for enabling self-serve data collection and validation services.
Join us to overcome the common hurdles in CAD and GIS integration and enhance the efficiency of your workflows. This session is perfect for professionals, both new to FME and seasoned users, seeking to streamline their processes and leverage the full potential of their CAD and GIS systems.
Geospatial Synergy: Amplifying Efficiency with FME & Esri (Safe Software)
Dive deep into the world of geospatial data management and transformation in our upcoming webinar focusing on the powerful integration of FME and Esri technologies. This insightful session comprises two compelling segments aimed at enhancing your geospatial workflows, while minimizing operational hurdles.
In the first segment, guest speaker Jan Roggisch from Locus unveils how Auckland Council triumphed over the challenges of handling large, frequent data updates on ArcGIS Online using FME. Discover the journey from manual data handling to an automated, streamlined process that reduced server downtime from minutes to seconds: setting a new standard for local government organizations.
The second segment, led by James Botterill from 1Spatial, unveils the magic of incorporating ArcPy into your FME workflows. Delve into real-world scenarios where ArcGIS geoprocessing is harmoniously orchestrated within FME using the PythonCaller. Gain insights into raster-vector data conversion, spatial analysis, and a host of practical tips and tricks that empower you to leverage the combined capabilities of FME and Esri for efficient data manipulation and conversion.
Join us to explore the remarkable possibilities that open up when FME and Esri technologies converge – enhancing your ability to manage and transform geospatial data with unprecedented efficiency.
Introducing the New FME Community Webinar - Feb 21, 2024 (Safe Software)
Join us at Safe Software as we unveil the exciting new FME Community platform.
Picture yourself entering a vibrant, interconnected world, where every click brings you closer to a fellow FME enthusiast, a new idea, or a solution that could revolutionize your workflow.
Since its inception, the FME Community has been a dynamic hub for knowledge sharing, where thousands of users converge to exchange insights, engage in stimulating discussions, and collaboratively solve challenges. Now, envision this community reimagined - retaining the features you know and love, but infused with new, cutting-edge functionalities designed to make your experience even more enriching and effortless. The Community is also planned to soon serve as a central hub for all FME community activity across the web.
This webinar is your personal tour through this enhanced FME Community landscape. Whether you're an experienced user familiar with every nook and cranny of the old platform, or you're setting foot in this community for the first time, our webinar will ensure you navigate the new terrain with ease and confidence. Discover how to maximize your engagement, tap into the wealth of resources available, and contribute to the growing tapestry of FME innovation.
Join us in celebrating the future of FME collaboration, where your next breakthrough idea, insightful article, or spirited discussion awaits. Don't miss this opportunity to be a part of the evolution of the FME Community!
Breaking Barriers & Leveraging the Latest Developments in AI Technology (Safe Software)
Explore how to best leverage the latest AI technology in our upcoming webinar, where we delve into advancements and trends in the field since our previous AI webinars in 2023. We're stitching together the final threads of this presentation as we speak, keeping pace with AI's breakneck speed, so expect a session brimming with the freshest insights, releases, and breakthroughs in AI – right up to the minute! A spotlight of this session will be Dmitri Bagh’s exploration of innovative AI integrations with FME, ranging from generating 3D features for augmented reality using Dall-E, to enhancing urban planning with orthoimagery completion, to showcasing the power of AI in workspace analysis and geoart creation.
Whether you're new to AI or an experienced practitioner, this webinar is tailored to keep you at the forefront of AI innovation. Get ready for a session that is as informative as it is inspiring, equipping you with the tools to excel in the dynamic world of artificial intelligence.
Best Practices to Navigating Data and Application Integration for the Enterpr... (Safe Software)
Navigating the complexities of vast enterprise data spread across multiple systems is challenging. This webinar is your guide to simplifying enterprise integration.
As a technology leader, you may grapple with legacy systems, shadow IT, and budget constraints. Data and personnel silos often impede technological progress. FME champions integrating superior business systems to bolster your organization's digital strength – efficiently and affordably, using your current team and accessible services.
Join us and partner guest speakers from Seamless in an engaging session exploring the essential roles of data and systems in modern enterprises. We'll provide insights on achieving high-quality data management, establishing strong governance, and enabling teams to manage their data effectively. Delve into strategies for ensuring high-quality data and building robust governance structures, with tips and tricks along the way.
This webinar features real-life case studies demonstrating success in diverse industries. Learn cutting-edge strategies for data governance and system integration. Don't miss this opportunity to gain valuable insights and best practices for transforming your data governance and system integration processes.
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers, without pulling teeth or pulling your hair out. Expect practical tips and strategies for successful relationship building that leads to closing the deal.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, with an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview (Prayukth K V)
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence gathering facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses.
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
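The power flows mentioned among PowSyBl's simulation tools can be made concrete with a toy example. The sketch below is not PowSyBl code (PowSyBl's actual Python API is provided by the pypowsybl package); it is a minimal DC power-flow calculation in plain Python on a hypothetical 3-bus network, illustrating the kind of computation a power-flow engine performs.

```python
# Toy DC power flow on a hypothetical 3-bus network (bus 0 is the slack bus).
# Illustrative only -- this is NOT PowSyBl code.

# Lines: (from_bus, to_bus, reactance in per-unit).
lines = [(0, 1, 0.1), (1, 2, 0.1), (0, 2, 0.2)]

# Net active-power injections at the non-slack buses (per-unit):
# bus 1 generates 1.0, bus 2 consumes 1.5; the slack bus balances the rest.
p1, p2 = 1.0, -1.5

# Susceptance of each line (b = 1/x).
b01, b12, b02 = (1.0 / x for _, _, x in lines)

# Reduced susceptance matrix B' for buses 1 and 2 (slack bus removed):
# diagonal = sum of susceptances touching the bus,
# off-diagonal = minus the susceptance of the connecting line.
B = [[b01 + b12, -b12],
     [-b12, b12 + b02]]

# Solve B' * theta = P for the voltage angles (2x2 Cramer's rule).
det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
theta1 = (p1 * B[1][1] - B[0][1] * p2) / det
theta2 = (B[0][0] * p2 - B[1][0] * p1) / det
theta = [0.0, theta1, theta2]  # slack angle is the 0.0 reference

# Active-power flow on each line: f_ij = (theta_i - theta_j) / x_ij.
flows = {(i, j): (theta[i] - theta[j]) / x for i, j, x in lines}
for (i, j), f in flows.items():
    print(f"line {i}-{j}: {f:+.2f} p.u.")
```

Note how bus 0, the slack, automatically supplies the 0.5 p.u. mismatch between generation and load. A real engine such as PowSyBl solves the full nonlinear AC equations over networks with thousands of buses, but the structure of the problem is the same.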
Accelerate your Kubernetes clusters with Varnish Caching (Thijs Feryn)
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Key Trends Shaping the Future of Infrastructure (Cheryl Hung)
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Connector Corner: Automate dynamic content and events by pushing a button (DianaGray10)
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
- Create a campaign using Mailchimp with merge tags/fields
- Send an interactive Slack channel message (using buttons)
- Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
2. Objectives
● Integrate spatial data from multiple sources into a SCADA Dashboard.
● Enable engineers to make decisions under emergency conditions.
● Deliver critical data to staff while they are not in the office.
3. Challenges
● Integrate powerline voltage, network data, power stations, and vehicle GPS data.
● Reduce effort to maintain and expand SCADA dashboards.
NamPower, Namibia’s national power utility, has for decades been a mainstay of the nation’s economy and is now positioned to be a main driver of Vision 2030, Namibia’s blueprint for broad-based, sustainable economic growth.
NamPower owns a world-class transmission system and network of overhead power lines spanning a distance of more than 25,000 kilometers, one of the longest of its kind in the world and enough to circle a continent. The national grid has been homegrown – designed and largely built by Namibians.
NamPower’s employees needed to be able to observe live power flows in their network. Originally, these could only be accessed through the microSCADA system, which required each user to have a license for the software. The new SCADA Dashboard provides this information without requiring additional microSCADA licenses, and makes it possible to view the flows over the Internet on any device at any time, without having to be at the office. Employees can use this tool to analyse the network; for example, it enables engineers to make decisions under emergency conditions, especially when they are not in the office.
Once the platform was created, other opportunities arose to display more spatial data. The dashboard needed to be expanded to track all company vehicles and display relevant statistics, such as vehicle type and registration, current speed, ignition status and driver name. Renewable power plants also needed to be added to the system to display current generation.
The purpose of the SCADA Dashboard was not to replace any current systems, but rather to combine data from various systems to obtain meaningful information.
For this dashboard to be effective, NamPower needed to integrate data from multiple sources:
● Powerline voltage from SCADA;
● Powerline network data, including transmission lines and power stations, in Smallworld GIS;
● Vehicle GPS data, so they could see where the nearest crews were in the event repairs are needed.
This SCADA dashboard was initially written with about 1,800 lines of Magik code, which required regular updates as changes were made in the network. Such changes took a long time to implement, to ensure that the logic in the code kept functioning as expected, so Francois looked for a better solution.
To replace the Magik code, Francois chose to create FME workspaces, because they are straightforward to maintain and expand.
An FME workspace was created to read power line voltages from the SCADA system, merge this with the correct power lines from the Smallworld GIS database and then display it in an HTML web page with a Google Map section to display the spatial data. Major substations are also displayed on the map. An HTTP Caller was used to read the vehicle tracking data in REST format from a cloud database and display it as an additional layer on the map.
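The core of that workspace is a join-then-render step. As a rough illustration only (the actual work happens in FME transformers, and field names such as line_id, voltage, and the coordinate strings below are hypothetical), the same logic can be sketched in Python: match SCADA voltage readings to GIS line records, then emit KML placemarks for the map layer.

```python
from xml.sax.saxutils import escape

# Hypothetical SCADA readings keyed by line id (kV), and GIS line records
# carrying a name and a "lon,lat lon,lat" coordinate string.
scada_voltages = {"L001": 330.2, "L002": 219.8}
gis_lines = [
    {"line_id": "L001", "name": "Auas-Kokerboom", "coords": "17.08,-22.63 18.12,-28.32"},
    {"line_id": "L002", "name": "Omburu-Khan", "coords": "15.93,-21.45 15.05,-22.05"},
]

def merge_to_kml(voltages, lines):
    """Join voltage readings onto line geometries (the merge step) and
    render each matched line as a KML Placemark (the map-layer step)."""
    placemarks = []
    for line in lines:
        v = voltages.get(line["line_id"])
        if v is None:
            continue  # unmatched lines are dropped, as a merger would do
        name = escape(f"{line['name']} ({v} kV)")
        placemarks.append(
            f"<Placemark><name>{name}</name>"
            f"<LineString><coordinates>{line['coords']}</coordinates></LineString>"
            f"</Placemark>"
        )
    return "<kml><Document>" + "".join(placemarks) + "</Document></kml>"

print(merge_to_kml(scada_voltages, gis_lines))
```

In the actual workspace this join is what the FeatureMerger performs, with the HTML and map output assembled downstream.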
Another workspace is responsible for reading the current generation output of renewable power plants from the SCADA system, creating a separate table to display these values, and creating another KML layer for display in the SCADA Dashboard.
Francois found it helpful to use the FeatureMerger transformer when matching, comparing, or filtering objects from two sources.
He also chose to use the XMLTemplater transformer for creating HTML tables in the dashboard, as it gave him more control over his output than the HTMLReportGenerator. He shared his findings in the FME Community, and received helpful ideas back which made the work even easier.
(For presenter’s reference, here is the thread:)
https://knowledge.safe.com/questions/63652/dashboarding-creating-html-tables-and-formatting-i.html
The ListBuilder transformer worked well to index the objects read from the SCADAOut file and then place them where he wanted in the HTML file, using the fme:get-attribute function in the XMLTemplater (i.e. something like {fme:get-attribute(concat("_list{",$index,"}.OV"))}).
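The pattern here is: gather readings into an indexed list, then look each value up by index while filling in an HTML table. A small Python sketch of that idea (the row values and field names are invented; only the _list{index}.OV naming comes from the slide) might look like:

```python
# Emulates the ListBuilder + XMLTemplater pattern: readings collected into
# an indexed list, then each indexed value placed into an HTML table cell,
# much like {fme:get-attribute(concat("_list{",$index,"}.OV"))} does.
readings = [  # hypothetical SCADAOut values, in list order
    {"name": "Ruacana G1", "OV": "82.1"},
    {"name": "Van Eck", "OV": "14.5"},
]

def build_table(rows):
    cells = []
    for index, row in enumerate(rows):  # index plays the role of $index
        cells.append(
            f'<tr id="row{index}"><td>{row["name"]}</td><td>{row["OV"]}</td></tr>'
        )
    return "<table>" + "".join(cells) + "</table>"

print(build_table(readings))
```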
The ever-popular HTTPCaller transformer was also useful: Francois said it made the REST calls needed to link up their vehicle tracking data very easy.
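The source does not describe the tracking service's response format, but assuming a JSON payload of vehicle records (every field name below is illustrative), the step after the HTTP call amounts to parsing the response and turning each vehicle into a map feature, for example:

```python
import json

# A hypothetical JSON payload like one an HTTPCaller might fetch from the
# vehicle-tracking REST service (registration, driver, speed, ignition, position).
payload = json.dumps([
    {"reg": "N12345W", "driver": "J. Shipanga", "speed_kmh": 62,
     "ignition": True, "lat": -22.57, "lon": 17.08},
])

def vehicles_to_placemarks(raw):
    """Turn tracked vehicles into KML Placemarks for the vehicle map layer."""
    out = []
    for v in json.loads(raw):
        desc = (f"Driver: {v['driver']}, speed: {v['speed_kmh']} km/h, "
                f"ignition: {'on' if v['ignition'] else 'off'}")
        out.append(
            f"<Placemark><name>{v['reg']}</name>"
            f"<description>{desc}</description>"
            f"<Point><coordinates>{v['lon']},{v['lat']}</coordinates></Point>"
            f"</Placemark>"
        )
    return out

print(vehicles_to_placemarks(payload)[0])
```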
NamPower’s management team and engineers now rely on the SCADA Dashboard to provide the status of their electricity network when they are not in the office. Although it does not replace any engineering tools, it adds value to day-to-day operations for monitoring the system.
Most notably, a dynamic Google Map implementation is now possible; in the code-based version of the dashboard, only a static map could be displayed.
In the dashboard, the interactive maps are shown on the right hand side, and power flows are on the left.
With these dynamic maps, different map or KML layers may be turned on or off, and areas of interest can be zoomed into. The map is now interactive.
In this screenshot, a network link file was created to dynamically show power line flows within Google Earth, right from the SCADA Dashboard.
And in this screenshot, vehicle locations are shown and can be interacted with.
For example, the closest vehicle to a fault on a power line can be identified with the Dashboard and radioed for assistance to attend to the issue.
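Finding the nearest crew is a straightforward nearest-point problem. As a hedged sketch (the vehicle positions and fault location are invented; a great-circle distance is assumed, though any distance metric would do), it could look like:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def closest_vehicle(fault, vehicles):
    """Pick the tracked vehicle nearest to a (lat, lon) fault location."""
    return min(vehicles,
               key=lambda v: haversine_km(fault[0], fault[1], v["lat"], v["lon"]))

vehicles = [  # hypothetical tracked positions
    {"reg": "N12345W", "lat": -22.57, "lon": 17.08},
    {"reg": "N67890W", "lat": -26.64, "lon": 15.15},
]
fault = (-22.60, 17.10)  # hypothetical fault on a power line

print(closest_vehicle(fault, vehicles)["reg"])  # → N12345W
```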
When asked about the benefits of this solution, Francois pointed to the speed of design and the ease of maintaining and expanding the dashboard. The FME version is almost codeless, apart from the small amount of HTML written in the XMLTemplater transformers. This makes maintaining and expanding the dashboard fast and effective, and minimises the logical errors that could creep in, something that was difficult to manage with the code-based version.
It’s now very easy to add new types of spatial data to the dashboard with very little effort. Instead of writing more code, Francois only needed to add a few transformers to handle additional spatial data, such as the vehicle tracking information or renewable power plants.
For example, a full screen map was added to the Dashboard with little effort. A renewables dashboard based on the principles of the SCADA Dashboard could be created within one day.
Requests and ideas to expand the dashboard are arising due to the usefulness of the current platform. These include expanding it to include all SCADA system screens, and analysing renewable power plants’ generation based on their location.
Finally, FME ensures that the Dashboard runs with maximum uptime. If the server has to restart for some reason, FME Server automatically restarts all scheduled workspaces.