The document provides an overview of MySQL Cluster, a database clustering product. It describes the key components of MySQL Cluster: management nodes, data nodes, and SQL nodes. It explains how MySQL Cluster provides high availability and automatically partitions data across nodes. Benchmarks show that MySQL Cluster can scale out, improving performance and handling increased load as more nodes are added.
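The three node types described above are wired together through the management server's configuration file. The following is a minimal sketch of a `config.ini` for one management node, two data nodes, and one SQL node; all hostnames, paths, and memory sizes are illustrative, not taken from the deck:

```ini
[ndbd default]
NoOfReplicas=2          ; each fragment of data is stored on 2 data nodes
DataMemory=80M          ; memory for table data (illustrative size)
IndexMemory=18M         ; memory for hash indexes (illustrative size)

[ndb_mgmd]              ; management node
NodeId=1
HostName=192.168.0.10
DataDir=/var/lib/mysql-cluster

[ndbd]                  ; first data node
HostName=192.168.0.11
DataDir=/usr/local/mysql/data

[ndbd]                  ; second data node
HostName=192.168.0.12
DataDir=/usr/local/mysql/data

[mysqld]                ; SQL node (mysqld with the NDB engine enabled)
HostName=192.168.0.13
```

With `NoOfReplicas=2`, the two data nodes form one node group, so every row is held on both nodes and the cluster survives the loss of either one.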
OSSCube MySQL Cluster Tutorial by Sonali at OSSPAC 09
Sonali from OSSCube presents a MySQL Cluster tutorial at OSSPAC 2009.
OSSCube: a leading open-source evangelist company.
To know how we can help your business grow, contact:
India: +91 995 809 0987
USA: +1 919 791 5472
Web: www.osscube.com
Mail: sales@osscube.com
2. Disclaimer
The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.
4. Agenda
• MySQL Cluster Product Overview
– What is MySQL Cluster?
– MySQL Cluster Components
– MySQL Cluster Manager ™
– Use Cases
– Benchmarks
• MySQL Cluster 7.2
7. Multi-Data Center Scalability
Geographic Replication
• Replicate complete clusters across data centers
– DR & data locality
– Fully active/active
– No passive resources
• Split individual clusters across data centers
– Synchronous replication & auto-failover between sites
– Delivered as part of MySQL Cluster 7.2 DMR
8. Mapping Applications to HA Technology
[Matrix mapping application classes to three HA technologies — columns: Database Replication; Clustered / Virtualized; Shared-Nothing, Geo-Replicated Cluster. Rows: E-Commerce / Trading (1); Session Management (1); User Authentication / Accounting (1); Feeds, Blogs, Wikis; OLTP (1); Data Warehouse/BI; Content Management; CRM; Collaboration; Packaged Software; Network Infrastructure; Core Telco Apps (HLR/HSS/SDP…).]
1: Replication used in combination with cluster- or virtualization-based HA
9. MySQL Cluster
• MySQL Cluster is made up of 3 components:
– Management Node: handles administrative tasks such as monitoring the nodes, backing up the cluster's data nodes, and more – its binary is ndb_mgmd;
– Data (or Storage) Node: responsible for processing and storing the data of the databases hosted in the cluster – its binary is ndbd or ndbmtd;
– API (or SQL) Node: the node that receives application connections and sends and requests the data stored in the Data Nodes – its binary is mysqld;
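The three components above map one-to-one onto sections of the cluster's global configuration file, which the Management Node reads at startup. A minimal config.ini sketch (host names and the data directory are illustrative assumptions, not from the slides):

```ini
# config.ini - read by ndb_mgmd; hosts and paths below are placeholders

[ndbd default]
NoOfReplicas=2                   # keep two copies of every fragment

[ndb_mgmd]                       # Management Node (binary: ndb_mgmd)
HostName=mgm1.example.com

[ndbd]                           # Data/Storage Node (binary: ndbd or ndbmtd)
HostName=data1.example.com
DataDir=/var/lib/mysql-cluster

[ndbd]
HostName=data2.example.com
DataDir=/var/lib/mysql-cluster

[mysqld]                         # API/SQL Node (binary: mysqld)
HostName=sql1.example.com
```

Each ndbd/mysqld process then points at the management server with --ndb-connectstring to join the cluster.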
10. MySQL Cluster - Auto-Partitioning
[Diagram build-up across slides 10–22: table T1 is split into four partitions, P1–P4. Step by step, each partition becomes a primary fragment (F1–F4) on one data node and a replica fragment on another: Data Node 1 holds F1 plus a replica of F3, Data Node 2 holds F3 plus a replica of F1, and together they form Node Group 1; Data Node 3 holds F2 plus a replica of F4, Data Node 4 holds F4 plus a replica of F2, forming Node Group 2.]
23. MySQL Cluster - Auto-Partitioning
[Table T1 (partitions P1–P4) shown against a feature checklist; checkmarks appear next to Scalability & Performance, HA, SQL/Joins, and ACID Transactions; Ease of use is still open.]
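The partitioning sketched in the preceding slides requires no partitioning clauses from the user: any table created with the NDB storage engine is hashed on its primary key and its fragments are spread across the data nodes automatically. A hypothetical example (table and column names are illustrative), to be run on any SQL node of a running cluster:

```sql
-- Auto-partitioned table: no PARTITION BY clause needed.
CREATE TABLE t1 (
  id   INT NOT NULL,
  name VARCHAR(64),
  PRIMARY KEY (id)            -- rows are hashed on this key across data nodes
) ENGINE=NDBCLUSTER;

-- Optional: see which partitions a query would touch.
EXPLAIN PARTITIONS SELECT * FROM t1;
```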
25. MySQL Cluster
• It is recommended that:
– every component be at least duplicated, giving an installation of at least 6 nodes in the cluster;
– the cluster be placed on a subnet that carries cluster traffic only, so that no packets are lost;
– all machines acting as SQL and Storage nodes have identical configurations, to avoid bottlenecks;
– the binaries of all components be of the same product version and release;
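Once such a symmetric, same-version cluster is up, the state of every node can be checked from any SQL node through the ndbinfo schema (available since MySQL Cluster 7.1); a sketch, assuming a running cluster:

```sql
-- Run from any SQL node; lists each node's id, status and start phase.
SELECT node_id, status, start_phase
FROM ndbinfo.nodes;
```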
26. Comparison

HA Technology               | MySQL Replication               | WSFC*                                  | Oracle VM Template                     | Solaris Cluster                        | MySQL Cluster
Platform Support            | All supported by MySQL Server** | Windows Server 2008                    | Oracle Linux                           | Oracle Solaris                         | All supported by MySQL Cluster****
Supported Storage Engine    | All (InnoDB recommended)        | InnoDB                                 | InnoDB                                 | All (InnoDB recommended)               | NDB (MySQL Cluster)
Auto IP Failover            | No                              | Yes                                    | Yes                                    | Yes                                    | Yes
Auto Database Failover      | No                              | Yes                                    | Yes                                    | Yes                                    | Yes
Auto Data Resynchronization | No                              | N/A – Shared Storage                   | N/A – Shared Storage                   | N/A – Shared Storage                   | Yes
Failover Time               | User / Script Dependent         | 5 seconds + InnoDB Recovery Time***    | 5 seconds + InnoDB Recovery Time***    | 5 seconds + InnoDB Recovery Time***    | 1 Second or Less
Replication Mode            | Asynchronous / Semi-Synchronous | N/A – Shared Storage                   | N/A – Shared Storage                   | N/A – Shared Storage                   | Synchronous
Shared Storage              | No, distributed across nodes    | Yes                                    | Yes                                    | Yes                                    | No, distributed across nodes
No. of Nodes                | Master & Multiple Slaves        | Active / Passive Master + Multiple Slaves | Active / Passive Master + Multiple Slaves | Active / Passive Master + Multiple Slaves | 255 + Multiple Slaves
Availability Design Level   | 99.9%                           | 99.95%                                 | 99.99%                                 | 99.99%                                 | 99.999%

* Windows Server 2008R2 Failover Clustering
** http://www.mysql.com/support/supportedplatforms/database.html
*** InnoDB recovery time dependent on cache and database size, database activity, etc.
**** http://www.mysql.com/support/supportedplatforms/cluster.html
27. MySQL Cluster Manager ™
• Works through MySQL Enterprise Monitor;
• Allows Storage Nodes to be started, restarted and stopped through a graphical interface;
• Live Demo: http://bit.ly/rqjQRp
28. MySQL Cluster Manager ™
MySQL Cluster nodes automatically restarted
after configuration change
29. Benchmarks – Scale-Out
Adding servers scales the cluster out, increasing its capacity to handle requests!
30. MySQL Cluster Architecture
[Architecture diagram, repeated on slides 30–31: Application Nodes (REST, LDAP and other APIs) connect to the data layer; the Data Nodes are split into Node Group 1 (Node 1 holding F1/F3, Node 2 holding F3/F1) and Node Group 2 (Node 3 holding F2/F4, Node 4 holding F4/F2), alongside redundant Cluster Mgr nodes. The feature checklist marks SQL/Joins and ACID Transactions, with HA also checked on slide 31; Scalability & Performance and Ease of use remain open.]
32. Wagner Bianchi
A specialist in MySQL and other relational database servers such as Oracle and SQL Server. He holds a degree in Database Management and an MBA in Business Administration from Fundação Getúlio Vargas, and is pursuing a postgraduate degree in Databases at Universidade Gama Filho in the Distrito Federal. He holds several certifications, among them SCMA, SCMDEV, SCMDBA and SCMCDBA. He currently works as a Senior Database Consultant at WAGNERBIANCHI.COM.