The document discusses the Rails asset pipeline and Sprockets gem. It summarizes that the asset pipeline packages and minifies JavaScript and CSS assets, manages dependencies between assets, and provides a preprocessor pipeline. It also describes Sprockets directives like //= require that search for and include assets. Gems like turbo-sprockets-rails3 and quiet_assets can improve the asset pipeline by speeding up asset precompilation and hiding asset requests in logs.
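As a sketch of what those directives look like, a typical Sprockets manifest is an asset file whose special comments drive the dependency search (the required assets here are illustrative, not from the original talk):

```javascript
// app/assets/javascripts/application.js (illustrative manifest)
// Each //= line is a Sprockets directive, not executable JavaScript.
//= require jquery
//= require_tree .
//= require_self
```

`require` pulls in a single asset by logical path, `require_tree .` recursively includes everything under the manifest's directory, and `require_self` inserts the manifest's own body at that point in the bundle.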
This presentation gives an overview of the Apache Gobblin project. It explains Apache Gobblin in terms of its architecture, its data sources/sinks and its work unit processing.
Links for further information and connecting
http://www.amazon.com/Michael-Frampton/e/B00NIQDOOM/
https://nz.linkedin.com/pub/mike-frampton/20/630/385
https://open-source-systems.blogspot.com/
Slides from my presentation at WellRailed (27th July 2011)
Additional Links: http://ryanbigg.com/guides/asset_pipeline.html
twitter: @static_storm
blog: http://incitecode.com
ArangoDB is an open source, multi-model NoSQL database that is written in C++ and embeds Google's V8 engine to implement the higher levels of its functionality in JavaScript. Recently we decided to switch from C++03 to C++11 for the database kernel. In this talk I will first give a short overview of the software architecture of ArangoDB and proceed to tell you about our practical experiences with the switch to C++11. I will explain which of the parts of the "new" standard have been more important and which have been less useful, and I will report about the difficulties we encountered.
Backbonification for dummies - Arrrrug 10/1/2012 – Dimitri de Putte
This presentation was given at the Arrrrug meeting as a first introduction to backbone.js in combination with Rails, after a couple of weeks of playing with backbone.js.
Note: it is really at an introductory level; in the meantime, my level of backbone.js and coffeescript has increased.
A production project's architecture with Clojure – Jordi Llonch
We will describe a project architecture whose design aims to let functionality scale without too much accidental complexity. The architecture picks up ideas from DDD and CQRS.
A successful pipeline moves data efficiently, minimizing pauses and blockages between tasks, keeping every process along the way operational. Apache Airflow provides a single customizable environment for building and managing data pipelines, eliminating the need for a hodge-podge collection of tools, snowflake code, and homegrown processes. Using real-world scenarios and examples, Data Pipelines with Apache Airflow teaches you how to simplify and automate data pipelines, reduce operational overhead, and smoothly integrate all the technologies in your stack.
Check out the contents on our browser-based liveBook reader here: https://livebook.manning.com/book/data-pipelines-with-apache-airflow/
HBaseCon2017 Community-Driven Graphs with JanusGraph – HBaseCon
Graphs are well-suited for many use cases to express and process complex relationships among entities in enterprise and social contexts. Fueled by the growing interest in graphs, various graph databases and processing systems dot the graph landscape. JanusGraph is a community-driven project that continues the legacy of Titan, a pioneer of open source graph databases. JanusGraph is a scalable graph database optimized for large-scale transactional and analytical graph processing. In this session, we will introduce JanusGraph, which features full integration with the Apache TinkerPop graph stack. We will discuss JanusGraph's optimized storage model, which relies on HBase for fast graph traversal and processing.
by Jason Plurad and Jing Chen He of IBM
Dynamic Class-Based Spark Workload Scheduling and Resource Using YARN with L... – Databricks
While working with large enterprises that have several users running Apache Spark applications on a shared cluster, we have observed that they often run into problems prioritizing workloads and meeting their end-user and application Service Level Agreements (SLAs) on account of resource contention. Configuring YARN queues is the first step, but that alone cannot efficiently take into account job priority as well as job SLAs. Mitylytics has developed dynamic, policy-based job scheduling techniques that have been integrated with our machine learning service to fine-tune job scheduling.
We illustrate this with our tool called “mity-submit”, which can be used as a wrapper around any submission script to take advantage of these scheduling techniques for optimized Spark job execution in a multi-tenant environment. With these techniques, Spark job execution performance improved: we significantly improved throughput for high-priority jobs while “best-effort” queues were not starved. In addition, cluster CPU and memory utilization stayed at an optimal level.
Oracle RAC training | Oracle RAC training videos | Oracle RAC DBA training – Nancy Thomas
Website: http://www.todaycourses.com
RAC Online Training Concepts :
Identify Real Application Clusters components
Understand Real Application Clusters
Clusters Scalability and High Availability
The Necessity of Global Resources
Parallel Execution with RAC
RAC Software and Database Principles
RAC and Shared Storage Technologies
Understand VIPs
Install, create, administer, and monitor a Real Application Clusters database
Describe the installation of Oracle RAC 10g
Perform RAC pre-installation tasks
Perform cluster setup tasks
Install Oracle Clusterware
Install and configure Automatic Storage Management (ASM)
Install the Oracle database software
Create a cluster database
Install the Enterprise Manager agent on each cluster node
Use configuration and management tools for Real Application Clusters databases
Use Enterprise Manager cluster database pages
Define redo log files in a RAC environment
Define undo tablespaces in a RAC environment
Start and stop RAC databases and instances
Modify initialization parameters in a RAC environment
Manage ASM instances in a RAC environment
Develop a backup and recovery strategy for Real Application Clusters databases
Configure the RAC database to use ARCHIVELOG mode and the flash recovery area
Configure RMAN for the RAC environment
Configure and monitor Oracle Clusterware resources
Manually control the Oracle Clusterware stack
Change voting disk and OCR configuration
Back up or recover your voting disks and OCR files
Change VIP addresses
Use the CRS framework
Review high availability best practices
Add a new node to your cluster database
Remove a node from your cluster database
Decide on the best ASM configuration to use
Patch your RAC system in a rolling fashion
High Availability Architecture
In this session, we discussed the end-to-end working of Apache Airflow, focused mainly on the "why, what and how" factors. It covers DAG creation and implementation, the architecture, and pros and cons. It also shows how a DAG is created to schedule a job and what steps are required to build the DAG using a Python script, finishing with a working demo.
Tom Yitav (Co-Founder & CEO) @ CaStory:
We will talk about the use of GraphQL as an API layer and its deployment as an AWS Lambda. We will see a demo of bootstrapping such a service using a CLI tool called create-graphql-app. We will also share some of the main pros and cons compared to non-serverless APIs, and the benefits of going serverless in a startup company.
Operation Migration: Migrating Static Content into Cascade Server with our ne... – hannonhill
Ryan will introduce our new HTML migration tool (released in Cascade Server 7.4), discuss best practices for migrating content, and demo the tool in action. For those of you with dozens of sites that need to be moved over to Cascade Server, you won't want to miss this session.
The asset pipeline provides a framework to concatenate and minify or compress JavaScript and CSS assets. It also adds the ability to write these assets in other languages and pre-processors such as CoffeeScript, Sass and ERB.
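The concatenation, minification and fingerprinting described above are controlled from the environment config. As a hedged sketch (option names here match Rails 3.x; the `YourApp` module name is a placeholder, and `:uglifier` assumes the uglifier gem is in the Gemfile):

```ruby
# config/environments/production.rb (sketch for a Rails 3.x app)
YourApp::Application.configure do
  config.assets.enabled       = true       # turn the pipeline on
  config.assets.digest        = true       # fingerprint filenames for far-future caching
  config.assets.compress      = true       # minify JS/CSS during precompile
  config.assets.js_compressor = :uglifier  # assumes gem 'uglifier' is installed
  config.assets.compile       = false      # never compile on demand in production
end
```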
A brief introduction to creating a very simple application using AngularJS and Ruby on Rails. The example app is on GitHub:
https://github.com/elenatorro/BeersQuizz
Monitor Apache Spark 3 on Kubernetes using Metrics and Plugins – Databricks
This talk will cover some practical aspects of Apache Spark monitoring, focusing on measuring Apache Spark running on cloud environments, and aiming to empower Apache Spark users with data-driven performance troubleshooting. Apache Spark metrics allow extracting important information on Apache Spark’s internal execution. In addition, Apache Spark 3 has introduced an improved plugin interface extending the metrics collection to third-party APIs. This is particularly useful when running Apache Spark on cloud environments as it allows measuring OS and container metrics like CPU usage, I/O, memory usage, network throughput, and also measuring metrics related to cloud filesystems access. Participants will learn how to make use of this type of instrumentation to build and run an Apache Spark performance dashboard, which complements the existing Spark WebUI for advanced monitoring and performance troubleshooting.
Boost your productivity with Scala tooling! – MeriamLachkar1
Our rich ecosystem provides developers with powerful tools that improve productivity on small or huge projects.
In this talk, I will present the tools that allow me to focus on my projects by making tedious tasks easier. From bootstrapping projects to code linting and refactoring, from continuous integration to automatic publication and documentation rendering, come discover my favorite tools.
Real time Analytics with Apache Kafka and Apache Spark – Rahul Jain
A presentation cum workshop on real-time analytics with Apache Kafka and Apache Spark. Apache Kafka is a distributed publish-subscribe messaging system, while Spark Streaming brings Spark's language-integrated API to stream processing, allowing you to write streaming applications very quickly and easily. It supports both Java and Scala. In this workshop we are going to explore Apache Kafka, Zookeeper and Spark with a web click-streaming example using Spark Streaming. A clickstream is a recording of the parts of the screen a computer user clicks on while web browsing.
This introductory workshop is aimed at data analysts and data engineers who are new to Apache Spark and shows them how to analyze big data with Spark SQL and DataFrames.
In these partly instructor-led, partly self-paced labs, we will cover Spark concepts and you'll do labs for Spark SQL and DataFrames
in Databricks Community Edition.
Toward the end, you’ll get a glimpse into newly minted Databricks Developer Certification for Apache Spark: what to expect & how to prepare for it.
* Apache Spark Basics & Architecture
* Spark SQL
* DataFrames
* Brief Overview of Databricks Certified Developer for Apache Spark
Ruby on Rails 4 is out, featuring Russian Doll caching (AKA cache digests). In this article, I apply Russian Doll caching to one of my poorer-performing Rails 3 pages using the cache_digests gem.
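As an illustrative sketch of the technique (the `@project`/`todos` models and view path are hypothetical, not from the article), Russian Doll caching nests `cache` blocks so that changing one todo busts only its own fragment plus the enclosing project fragment, and sibling fragments are reused:

```erb
<%# app/views/projects/show.html.erb (hypothetical models) %>
<% cache @project do %>
  <h1><%= @project.name %></h1>
  <% @project.todos.each do |todo| %>
    <% cache todo do %>
      <p><%= todo.description %></p>
    <% end %>
  <% end %>
<% end %>
```

With the cache_digests gem (built into Rails 4), a digest of the template tree is folded into each cache key, so editing a partial expires the affected fragments without manual version bumps.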
DevOps and Testing slides at DASA Connect – Kari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We finished with a lovely workshop in which the participants tried to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
Essentials of Automations: Optimizing FME Workflows with Parameters – Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Generating a custom Ruby SDK for your web service or Rails API using Smithy – g2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Kubernetes & AI - Beauty and the Beast!?! @KCD Istanbul 2024 – Tobias Schneck
As AI technology pushes into IT, I found myself wondering, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our lovely cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and guide you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud or on-premise strategy we may need to apply AI to our own infrastructure and make it work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies that could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already gotten working for real.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview – Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Neuro-symbolic is not enough, we need neuro-*semantic* – Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
Accelerate your Kubernetes clusters with Varnish Caching – Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
The Art of the Pitch: WordPress Relationships and Sales – Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... – James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on the speed of releasing software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work, along with a knack for helping others understand how things work. He brings around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Securing your Kubernetes cluster: a step-by-step guide to success! – KatiaHIMEUR1
Today, after several years of existence, with an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
5. turbo-sprockets-rails3
● Speeds up rake assets:precompile by only recompiling changed assets, based on a hash of their source files
● Only compiles once to generate both fingerprinted and non-fingerprinted assets
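To make the hash-based idea above concrete, here is a toy Ruby sketch (not the gem's actual code): an asset only needs recompiling when the digest of its source changes, and the digest is embedded in the compiled filename:

```ruby
require 'digest/md5'

# Build a fingerprinted filename like "application-<md5>.js"
# from the logical name and the asset's source text.
def fingerprinted_name(name, source)
  ext  = File.extname(name)
  base = File.basename(name, ext)
  "#{base}-#{Digest::MD5.hexdigest(source)}#{ext}"
end

# An asset is stale (needs recompiling) only when its source digest
# no longer matches the digest recorded at the last compile.
def stale?(recorded_digest, source)
  Digest::MD5.hexdigest(source) != recorded_digest
end
```

Unchanged sources hash to the same digest, so precompile can skip them entirely, which is where the benchmark savings in the next slide come from.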
6. turbo-sprockets-rails3
Benchmark on a small Rails app:
● Uninstalled: 26.993s
● Installed
– first run: 18.525s
– unchanged assets: 9.386s
7. quiet_assets
● Hides asset requests in your Rails logs
● Lets you focus on SQL queries and rendering
● Just add gem 'quiet_assets' to your Gemfile
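The one-line install looks like this (placing it in the development group is a common convention, since that is the only environment where the log noise matters):

```ruby
# Gemfile
group :development do
  gem 'quiet_assets'  # silences "Started GET /assets/..." lines in the log
end
```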
8. assets_precompile_enforcer
● Raises an exception if an asset is not found in config.assets.precompile
● Avoids 500 errors in production due to uncompiled assets
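For context, the precompile list the enforcer checks against lives in the app config; the `admin.js`/`admin.css` names below are hypothetical extra manifests, not something from the slides:

```ruby
# config/environments/production.rb (sketch)
# application.js, application.css and non-JS/CSS files are precompiled by
# default; any other manifest referenced from a view must be listed here,
# or production requests for it will 500 at runtime.
config.assets.precompile += %w( admin.js admin.css )
```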
9. Assets in Rails 4
● Much faster; turbo-sprockets-rails3 is no longer needed
● Non-digest assets are no longer compiled when fingerprints are enabled
● Source map support: easily debug errors in minified JavaScript
– http://www.html5rocks.com/en/tutorials/developertools/sourcemaps/
– If you give your asset source maps to Errbit, it could show you where a production error occurred in the original (unminified) JavaScript