This document summarizes a presentation about Terraform best practices and a deep dive into how it works. The presentation covers what Terraform is, how it can be used to implement infrastructure as code from manual processes to collaborative workflows, and why automating infrastructure provides benefits like faster deployments, increased control and predictability. It discusses best practices for Terraform configuration, implementation patterns like separating infrastructure from application code, and sample workflows for deploying infrastructure and platform services.
Red Hat Summit 2017 - LT107508 - Better Managing your Red Hat footprint with ...Miguel Pérez Colino
The Red Hat portfolio is well suited to deliver cloud solutions to customers. We're going beyond solution-building and delivery to improve operations by launching an effort to improve log aggregation. Learn how new capabilities can help you better manage your Red Hat footprint.
Pomerania Cloud case study - Openstack Day Warsaw 2017Łukasz Klimek
We have deployed an OpenStack-based public cloud, pomeraniacloud.pl, and integrated it with an e-commerce solution based on Drupal 7 and Drupal Commerce.
This presentation contains a summary of our work. It was presented during OpenStack Day Warsaw 2017.
This is the speech Shen Li gave at GopherChina 2017.
TiDB is an open source distributed database. Inspired by the design of Google F1/Spanner, TiDB features infinite horizontal scalability, strong consistency, and high availability. The goal of TiDB is to serve as a one-stop solution for data storage and analysis.
In this talk, we will mainly cover the following topics:
- What is TiDB
- TiDB Architecture
- SQL Layer Internal
- Golang in TiDB
- Next Step of TiDB
FOSSAsia 2016 - Shared storage management in the virtualization worldLiron Aravot
oVirt is management software for server and desktop virtualization; as such, it needs to manage the shared storage used by the VMs it manages, which can run on different hypervisors. The challenge of doing so increases as the demand for high performance rises, along with the number of managed hypervisors/VMs and the potential number of points of failure. This session will focus on the different approaches taken in oVirt to manage shared storage in the virtualization world, and the pros and drawbacks of each approach.
OpenNebulaConf2017EU: Growing into the Petabytes for Fun and Profit by Michal...OpenNebula Project
Scale your OpenNebula into the Petabytes with LizardFS. Let us show you how to get from a small hyperconverged setup to a Petabyte cloud system utilising LizardFS with very little effort.
YouTube: https://youtu.be/T-6GMwjgQjs
OpenNebulaConf2017EU: Welcome Talk State and Future of OpenNebula by Ignacio ...OpenNebula Project
We’re moving into a world of open cloud, where each organization can find the right cloud for its unique needs. A single cloud management platform cannot be all things to all people; there will be a cloud space with several offerings focused on different environments and/or industries. The OpenNebula commitment to the open cloud flows directly out of its mission, to become the simplest cloud enabling platform, and its purpose, to bring simplicity to the private and hybrid enterprise cloud. OpenNebula exists to help companies build simple, cost-effective, reliable, open enterprise clouds on existing IT infrastructure. The OpenNebula Conference will be a great opportunity to revisit our mission, vision and commitment, to look back at how the project has grown in the last 8 years, and to give a peek at what to expect from the project in the near future.
YouTube: https://youtu.be/evzy5bLwDSM
OpenNebulaConf2017EU: Enabling Dev and Infra teams by Lodewijk De Schuyter,De...OpenNebula Project
At the Department of Environment and Spatial Planning we started two projects. The first was to replace our VMware-based hosting environment with an open, hardware-vendor-neutral hypervisor environment. The second project’s goal was to better enable our dev teams. This is the story of the second project: what we built and how it works using OpenNebula, Ceph and our existing tooling.
At the time of writing this abstract, our OpenNebula environment is used by 4 dev teams (almost 30 developers) and an infra team, hosting 700 virtual servers and counting. We are executing 300 deploys (as part of the development cycle) per week and counting.
I will be talking about the setup we realized, the choices we made, and the deployment tool we ended up with, integrating the toolset we already used, i.e. SVN, Ansible, OpenNebula, F5, JFrog, Ubuntu/CentOS, Zabbix, Bareos, Barman, ...
YouTube: https://youtu.be/OEftbpJ_lSY
You’ve spent considerable time picking your orchestrator, choosing the right cloud provider and configuring all the intricate details of your new Docker environment, but what about monitoring? In this talk we will cover the tools available on the market: upsides, downsides and upcoming changes. We’ll open the floor to questions, comments and feedback for each tool, so you have a complete view on the monitoring landscape.
OpenNebulaConf2017EU: Transforming an Old Supercomputer into a Cloud Platform...OpenNebula Project
Currently, typical supercomputers have an expected useful life of 3 or 4 years. One way or another, after this time period, infrastructure is typically replaced or upgraded to face the increasing resource demand from users and companies. This always gives rise to the same question: what should be done with the old hardware once it is replaced? Possible solutions come in the form of decommissioning, splitting it up for spare parts, or donating it, but in several cases the hardware can still provide value when used for different tasks. In this talk we will describe how we converted the old Tier 1 Flemish supercomputer (https://www.ugent.be/hpc/en/infrastructure/tier1) into a cloud platform using OpenNebula. During this conversion process we faced several technical challenges, first and foremost how to recycle hardware that was designed for a classical HPC environment for use in a private cloud. We will describe the steps taken to isolate VM traffic through the existing InfiniBand interconnect using the VXLAN network technology. We will also address how we managed mapping our internal (university) and external (industry) users using the OpenNebula “remote” authentication plugin. Finally, we will discuss how we used the InfiniBand interconnect to share the Ceph storage backend and VM traffic in a secure manner. After a testbed phase, in which only pilot users are given access and provide feedback, the new UGent HPC Cloud platform called “Grimer” will be available in production.
YouTube: https://youtu.be/jHchktxIZnM
MongoDB .local Houston 2019: MongoDB Atlas Data Lake Technical Deep DiveMongoDB
MongoDB Atlas Data Lake is a new service offered by MongoDB Atlas. Many organizations store long-term, archival data in cost-effective object storage like Amazon S3, Google Cloud Storage, and Azure Blob Storage. However, many of them do not have robust systems or tools to effectively utilize large amounts of data to inform decision making. MongoDB Atlas Data Lake is a service allowing organizations to analyze their long-term data to discover a wealth of information about their business.
This session will take a deep dive into the features that are currently available in MongoDB Atlas Data Lake and how they are implemented.
Oleksandr gives a Lightning talk about GIS integration: Storing and displaying geospatial data using OpenGeo Suite.
I will talk about:
- GIS itself;
- the open source software suite OpenGeo Suite;
- geospatial data representation in GeoServer (shapefiles and PostgreSQL databases with the PostGIS extension);
- scalable data import using the REST API.
Finally, I will present a way to serve the stored data using OpenLayers.
This topic might be useful for Ruby devs, GIS enthusiasts looking for free and powerful tools, and other backend devs.
What is a data platform? Why do we need one? And how do we build one in the cloud? This talk covers the essential engineering facets of a data platform: flows, persistence, access, standardization and data processing. We discuss how these facets combine into a unified platform, and how cloud technologies such as managed services and serverless help and challenge us to build it into a powerful business tool.
These are slides from a presentation at a "code naturally" meetup we held on 30 April 2018.
Zeus: Uber’s Highly Scalable and Distributed Shuffle as a ServiceDatabricks
Zeus is an efficient, highly scalable, distributed shuffle-as-a-service that powers all data processing (Spark and Hive) at Uber. Uber runs some of the largest Spark and Hive clusters on top of YARN in the industry, which leads to many issues such as hardware failures (burned-out disks) and reliability and scalability challenges.
This talk goes over the host identification process we follow, the development of EyeWitness 1.0, the problems which led to 2.0, and future work on EyeWitness.
Monitoring and Scaling Redis at DataDog - Ilan Rabinovitch, DataDogRedis Labs
Think you have big data? What about high availability requirements? At DataDog we process billions of data points every day, including metrics and events, as we help the world monitor their applications and infrastructure. Being the world’s monitoring system is a big responsibility, and thanks to Redis we are up to the task. Join us as we discuss how the DataDog team monitors and scales Redis to power our SaaS-based monitoring offering. We will discuss our usage and deployment patterns, as well as dive into monitoring best practices for production Redis workloads.
Disaster Recovery Experience at CACIB: Hardening Hadoop for Critical Financia...DataWorks Summit
Hadoop is becoming a standard platform for building critical financial applications such as risk reporting, trading and fraud detection. These applications require a high level of SLAs (service-level agreements) in terms of RPO (Recovery Point Objective) and RTO (Recovery Time Objective). To achieve these SLAs, organizations need to build a disaster recovery plan that covers several layers, ranging from the infrastructure to the clients, through the platform and the applications. In this talk, we will present the different architecture blueprints for disaster recovery as well as their corresponding SLA objectives. Then, we will focus on the stretch cluster solution that Crédit Agricole CIB is using in production. We will discuss the solution’s advantages, drawbacks and the impact of this approach on the global architecture. Finally, we will explain in detail how to configure and deploy this solution and how to integrate each layer (storage layer, processing layer, ...) into the architecture.
Building a data pipeline to ingest data into Hadoop in minutes using Streamse...Guglielmo Iozzia
Slides from my talk at the Hadoop User Group Ireland meetup on June 13th 2016: building a data pipeline to ingest data from sources of different nature into Hadoop in minutes (and no coding at all) using the Open Source Streamsets Data Collector tool.
How Sysbee Manages Infrastructures and Provides Advanced Monitoring by Using ...InfluxData
Discover how Sysbee helps organizations bring DevOps culture to small and medium enterprises. Their team helps customers by improving stability, security and scalability while providing cost-effective IT infrastructure. Learn how monitoring everything can improve your processes and simplify debugging!
- Sysbee’s introspection on monitoring tools over the years
- How TSDBs, and specifically InfluxDB, fit into improving observability
- Their approach to using the TICK Stack to improve the web hosting industry
Terraforming your Infrastructure on GCPSamuel Chow
A talk I gave at the Google Cloud Platform LA Meetup event at Google Playa Vista on Nov 6, 2019. This is a 1+ hour-long, tutorial-oriented talk on Infrastructure as Code (IaC), Terraform (as a toolset for IaC and modern DevOps), and how to leverage the practice and tools to define, deploy, and manage your infrastructure in GCP.
DevOpsDays Tel Aviv DEC 2022 | Building A Cloud-Native Platform Brick by Bric...Haggai Philip Zagury
The overwhelming growth of technologies in the Cloud Native foundation overtook our toolbox and completely changed (well, really enhanced) the Developer Experience.
In this talk, I will try to provide my personal journey from the "Operator to Developer's chair" and the practices which helped me along my journey as a Cloud-Native Dev ;)
Scaling up uber's real time data analyticsXiang Fu
Realtime infrastructure powers critical pieces of Uber. This talk will discuss the architecture, technical challenges, learnings and how a blend of open source infrastructure (Apache Kafka/Flink/Pinot) and in-house technologies have helped Uber scale and enabled SQL to power realtime decision making for city ops, data scientists, data analysts and engineers.
Splunk, SIEMs, and Big Data - The Undercroft - November 2019Jonathan Singer
Guild members, join us on Thursday, November 14th at 6pm for our class on Splunk. Our Analyze Guild Master Jonathan Singer will be covering centralized logging, SIEM, big data, and much more.
Many architectures include both real-time and batch processing components. This often results in two separate pipelines performing similar tasks, which can be challenging to maintain and operate. We'll show how a single, well-designed ingest pipeline can be used for both real-time and batch processing, making the desired architecture feasible for scalable production use cases.
Bonnier News is the largest news organisation in Sweden, publishing Dagens Nyheter and Expressen, two of the country’s largest newspapers. When we needed to build a new data processing platform that could accommodate the needs of many different, competing brands, we turned to Openshift and Kubernetes. In this presentation, we will describe the architectural tradeoffs and choices we made, and how we have been able to deploy data flows at a high rate by focusing on technical simplicity.
DevOpsDaysRiga 2018: Eric Skoglund, Lars Albertsson - Kubernetes as data plat...DevOpsDays Riga
Lars Albertsson is an independent consultant for Bonnier News as well as Spotify, several startups and banks. He will talk together with Eric Skoglund, the product owner of Bonnier News’s data platform project.
Similar to Atmosphere 2018: Wojciech Krysmann - INFRA AS CODE - TERRAFORM DEEP DIVE AND BEST PRACTICES
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio’s cyber threat intelligence farming facilities, spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology pushes into IT, I found myself wondering, as an “infrastructure container Kubernetes guy”, how does this fancy AI technology get managed from an infrastructure operations point of view? Is it possible to apply our lovely cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and give you a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premises strategy we may need to apply it to our own infrastructure from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I already got working for real.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes much work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
15. General best practices

Do’s:
● Review the plan prior to apply
● Save the plan to a file, and apply from that file
● Run terraform fmt
● Enable bucket versioning for the tfstate bucket

Don’ts:
● Do not use ‘-target’
● Do not keep too many resources in one directory
● Do not create a bucket per tfstate
● Do not keep secrets in the repo unencrypted
● Do not try to build abstract / general-purpose modules
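The “save plan to file” and “bucket versioning” items map onto a standard remote-state setup. As a minimal sketch, assuming an AWS S3 backend (the bucket, key, and table names here are hypothetical), the state configuration might look like:

```hcl
terraform {
  backend "s3" {
    bucket         = "example-company-tfstate"       # hypothetical bucket, with versioning enabled on it
    key            = "prod/network/terraform.tfstate"
    region         = "eu-west-1"
    encrypt        = true
    dynamodb_table = "example-tf-locks"              # optional state locking
  }
}
```

The matching day-to-day workflow is then: run terraform fmt, run terraform plan -out=tfplan, review the saved plan, and apply exactly that file with terraform apply tfplan.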
17. Application
● Application code
● Runtime environment

Platform as a Service
● Instance
● Queue
● Database

Infrastructure as a Service
● VPC, network, gateways, ...
● DNS
● CDN

The PaaS and IaaS layers together form the infrastructure; the platform service(s) feed data to the application.
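One common way to wire these layers together in Terraform is to have the platform layer read the infrastructure layer’s outputs via remote state. A minimal sketch, assuming an S3 backend and an output named private_subnet_ids (all names here are hypothetical):

```hcl
# Platform (PaaS) layer: consume outputs published by the infrastructure (IaaS) layer
data "terraform_remote_state" "infra" {
  backend = "s3"
  config = {
    bucket = "example-company-tfstate"         # hypothetical
    key    = "prod/infra/terraform.tfstate"
    region = "eu-west-1"
  }
}

# Place the platform's database into subnets created by the IaaS layer
resource "aws_db_subnet_group" "app" {
  name       = "app-db-subnets"
  subnet_ids = data.terraform_remote_state.infra.outputs.private_subnet_ids
}
```

This keeps each layer in its own state file and directory, consistent with the “do not keep too many resources in one directory” advice, while still letting upper layers consume the lower layers’ data.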