As part of OTN Tour 2014, this presentation covers the basics of an In-Memory Data Grid (IMDG) solution, explains how it works and how it can be used within an architecture, and shows some use cases. Enjoy.
Virtualizing Latency Sensitive Workloads and vFabric GemFire, by Carter Shanklin
This presentation was made by Emad Benjamin of VMware Technical Marketing. Normally I wouldn't upload someone else's preso but I really insisted this get posted and he asked me to help him out.
This deck covers tips and best practices for virtualizing latency sensitive apps on vSphere in general, and takes a deep dive into virtualizing vFabric GemFire, which is a high-performance distributed and memory-optimized key/value store.
Best practices include how to configure the virtual machines and how to tune them appropriately to the hardware the application runs on.
Scale Out Your Big Data Apps: The Latest on Pivotal GemFire and GemFire XD, by VMware Tanzu
Companies across all industries and sizes are investing in strategic custom applications to enhance their competitive advantages. Developing these applications requires continuous improvement, based on insights gleaned from collecting and analyzing the data that they generate.
Big Data for high-performing, scalable and reliable applications requires a new set of tools and technologies. Pivotal GemFire is a distributed in-memory NoSQL data management solution for creating high-scale custom applications. Pivotal GemFire XD supports structured data as part of the industry’s first Hadoop-based platform for creating closed-loop analytics solutions, enabling businesses to continuously optimize real-time automation in their applications.
An Introduction to Apache Geode (incubating), by Anthony Baker
Geode is a data management platform that provides real-time, consistent access to data-intensive applications throughout widely distributed cloud architectures.
Geode pools memory (along with CPU, network and optionally local disk) across multiple processes to manage application objects and behavior. It uses dynamic replication and data partitioning techniques for high availability, improved performance, scalability, and fault tolerance. Geode is both a distributed data container and an in-memory data management system providing reliable asynchronous event notifications and guaranteed message delivery.
Pivotal GemFire has had a long and winding journey, starting in 2002, winding through VMware and Pivotal, and finding its way to Apache in 2015. Companies using GemFire have deployed it in some of the most mission-critical, latency-sensitive applications in their enterprises, making sure tickets are purchased in a timely fashion, hotel rooms are booked, trades are made, and credit card transactions are cleared. This presentation discusses:
- A brief history of GemFire
- Architecture and use cases
- Why we are taking GemFire Open Source
- Design philosophy and principles
But most importantly: how you can join this exciting community to work on the bleeding edge in-memory platform.
In April 2015, Apache Geode (incubating) was born from Pivotal’s GemFire, the distributed in-memory database. However, the donation of over 1M LOC was just the beginning of the journey. In this talk we discuss how the GemFire engineering team has adapted their development infrastructure, processes, and culture to embrace the “Apache Way". We present lessons learned and best practices for new and incubating open source projects in areas of initial code submission, IP clearance, governance policies, code review, and community building. We discuss the challenges the team faced and how we changed internal communication and software design processes to a community-driven model. In particular, we highlight effective strategies for growing a project community and embracing new members. Finally, we show how changing to the open source model has increased both productivity and quality.
This talk is about a project to replace all IBM products in the company, using the databases as the example: the goal of the project, the learnings, and a short overview of the options.
We migrated about 500 DB2 databases to EnterpriseDB, with sizes ranging from very small up to 4 TB, and implemented a completely new, fully automated deployment of VMs and databases. The databases have now been in production for 11 months. The talk gives an overview of the project, the learnings, and a few technical parameters that proved important for stability and performance.
Best Practices & Lessons Learned from Deployment of PostgreSQL, by EDB
This talk will review best practices and lessons learned from working with large and mid-size companies on their deployment of PostgreSQL. We will explore the practices that helped industry leaders move through the stages of PostgreSQL adoption and get as much value out of their deployment as possible without incurring undue risk.
During this webinar, we will review best practices and lessons learned from working with large and mid-size companies on their deployment of PostgreSQL. We will explore the practices that helped industry leaders move through these stages quickly, and get as much value out of PostgreSQL as possible without incurring undue risk.
We have identified a set of levers that companies can use to accelerate their success with PostgreSQL:
- Application Tiering
- Collaboration between DBAs and Development Teams
- Evangelizing
- Standardization and Automation
- Balance of Migration and New Development
New enhancements for security and usability in EDB 13, by EDB
EDB 13 is here and it enhances our flagship database server and tools. This webinar will explore its security, usability, and portability updates. Join us to learn how EDB 13 can help you improve your PostgreSQL productivity and data protection.
Webinar highlights include:
- New security features such as SCRAM and the encryption of database passwords and traffic between Failover Manager agents
- Usability updates that automate partitioning, verify backup integrity, and streamline the management of failover and backups
- Portability improvements that simplify running PostgreSQL across on-premise and cloud environments
Hadoop World 2011: Unlocking the Value of Big Data with Oracle - Jean-Pierre ..., by Cloudera, Inc.
Analyzing new and diverse digital data streams can reveal new sources of economic value, provide fresh insights into customer behavior and identify market trends early on. But this influx of new data can create challenges for IT departments. To derive real business value from Big Data, you need the right tools to capture and organize a wide variety of data types from different sources, and to be able to easily analyze it within the context of all your enterprise data. Attend this session to learn how Oracle’s end-to-end value chain for Big Data can help you unlock the value of Big Data.
This webinar will give an overview of the typical use of EDB Postgres Advanced Server and EDB tools for a smart city project, developed in a city with more than 2.2 million people. The main goal of this project is to achieve 24/7 service with zero data loss and no service interruption.
During the session, we will explore the project architecture and we will discuss what specific tools were used, and how these tools help manage DBAs’ daily tasks. We will also discuss what type of data is critical for a smart city project.
The EDB Remote DBA Service allows companies to accelerate Postgres deployment, either on-prem or in the cloud, while reducing risk, saving money, and driving faster growth.
This webinar will cover all the benefits of using the Remote DBA Service, including:
- Around-the-Clock Assurance
- Comprehensive and proactive database management
- 24x7x365 monitoring and reporting
- Experienced and certified Postgres DBAs
- Cost-effective staff augmentation
Premium Database Management
- Designated Technical Lead assigned to each account
- Establishment of an HA infrastructure & DR planning
- Scalability advice & data tuning
- Capacity planning & analysis, including projections on database growth
Responsive, Affordable, and Reliable
- Ensure Postgres databases are running at peak performance, 24x7
- Cost-effective, experienced Postgres experts delivering 24x7x365 service
- Utilizing enterprise tools for monitoring, abiding by database resiliency requirements
Large Table Partitioning with PostgreSQL and Django, by EDB
"With great DB tables comes great responsibility." Our email messages table was growing too large and we needed to do something about it. We will talk about how we integrated PostgreSQL declarative partitioning with our Django-based Customer Portal to solve the problem.
Overcoming write availability challenges of PostgreSQL, by EDB
There's no shortage of physical replication solutions for PostgreSQL; they scale horizontally and provide high read availability. But where they fall short is write availability, which leads many users to consider PostgreSQL logical replication. Existing solutions have a single point of failure or depend on a forked, vendor-provided PostgreSQL extension, making reliable, enterprise-class logical replication hard to come by. Furthermore, these solutions put limits on scaling PostgreSQL.
By combining Kafka, an open-source event streaming system, with PostgreSQL, customers can get a fault-tolerant, scalable logical replication service. Learn how EDB Replicate leverages Kafka for the high write availability needed by today's demanding consumers, who expect their applications to be always available and won't tolerate latency.
Using PEM to understand and improve performance in Postgres: Postgres Tuning ..., by EDB
The Postgres Enterprise Manager (PEM) Tuning Wizard reviews your installation, and recommends a set of configuration options that will help tune a Postgres installation to best suit the anticipated workload. PEM's Performance Diagnostics uses Postgres' wait state information to analyze queries in context of the current workload and help identify further performance improvement opportunities in terms of locks, IO, and CPU bottlenecks.
This webinar will explore:
How to intelligently manage all your database servers with a single console
Useful features and functionality needed for visual database administration
Managing the performance and design of your database servers
Apache Geode (incubating) is the core of Pivotal GemFire, now available as an open source project governed by the Apache Software Foundation Incubator. The legacy of Pivotal GemFire and the ASF community uniquely position Geode as a secret ingredient for modern-day data management architectures.
These types of architectures require a robust in-memory data grid solution to handle a variety of use cases, ranging from enterprise-wide caching to real-time transactional applications at scale. In addition, as growth in memory size and network bandwidth continues to outpace that of disk, the importance of managing large pools of RAM at scale increases. It is essential to innovate at the same pace.
Apache Geode (incubating) has all the right ingredients to do for RAM what HDFS has done for direct-attached disks. The excitement (and funding!) in this area of the big data ecosystem is palpable, and the ASF is the place where the innovation is happening. Come to this session to understand: a brief history of Geode, architecture and use cases, design philosophy and principles, and most importantly, how you too can participate in the in-memory data center revolution.
Making your PostgreSQL Database Highly Available, by EDB
High Availability is one of the most important requirements for mission-critical database systems. It is important for business continuity.
Enterprises cannot afford an outage of mission-critical applications, as mere minutes of downtime can cost millions of dollars in lost revenue.
Therefore making a database environment highly available is typically one of the highest priorities and poses significant challenges/questions to enterprises and database administrators.
What you will learn at this webinar:
- Database high availability basics in PostgreSQL
- How to design your environment for high availability
- High availability options available for PostgreSQL
- What EDB can offer to help enterprises meet their high availability requirements
Public Sector Virtual Town Hall: High Availability for PostgreSQL, by EDB
Highly available databases are essential to organizations depending on mission-critical, 24/7 access to data. PostgreSQL is widely recognized as an excellent open-source database, with critical maturity and features that allow organizations to scale and achieve high availability.
This webinar will explore:
High availability concepts and workings
RPO, RTO, and uptime in high availability
Postgres high availability using streaming replication and logical replication
Important high availability parameters in PostgreSQL and options to monitor high availability
EDB tools (EDB Postgres Failover Manager, BART, etc.) to create a highly available Postgres architecture
In this webinar you will learn about the challenges faced when migrating an Oracle database to PostgreSQL. We will present lessons from the highly complex Oracle compatibility assessments of the past two years, including the more than 2,200,000 Oracle DDL constructs evaluated this year through the EDB migration portal.
During the talk we will cover:
- Storage definitions
- Packages
- Stored procedures
- PL/SQL code
- Vendor database APIs
- Complex database migrations
We will close with a demonstration of migration tools that significantly simplify Oracle-to-PostgreSQL migration and reduce its risks.
A complete guide to migrating legacy databases to PostgreSQL, by EDB
This webinar will review the challenges teams face when migrating an Oracle database to PostgreSQL. We will share insights from large-scale Oracle compatibility assessments carried out over the last two years, including more than 2,200,000 Oracle DDL constructs evaluated through the EDB migration portal in 2020.
During this session, we will cover:
Storage definitions
Tools
Stored procedures
PL/SQL code
Proprietary database APIs
Large-scale data migration
We will end this session with a demonstration of migration tools that considerably simplify, and help reduce the risks of, migrating an Oracle database to PostgreSQL.
Context:
A moderately technical webinar focused on how to move off Oracle, which pitfalls to avoid, and how to approach them.
Audience:
Business leaders and architects who want to assess the feasibility of moving off one of the most widely used, but also most disliked, legacy databases.
It's harder than ever to predict in advance the load your application will need to handle, so how do you design your architecture so that you can implement as you go and be ready for whatever comes your way? It's easy to focus on optimizing each part of your application, but your application architecture determines the options you have for making big leaps in scalability. In this talk we'll cover practical patterns you can build today to meet the needs of rapid development while still creating systems that can scale up and out. Specific code examples will focus on .NET but the principles apply across many technologies. Real-world systems will be discussed based on our experience helping customers around the world optimize their enterprise applications.
SpringPeople - Introduction to Cloud Computing, by SpringPeople
Cloud computing is no longer a passing fad. It is for real and is perhaps the most talked-about subject today. Various players in the cloud ecosystem have provided definitions closely aligned with their own sweet spots, be it infrastructure, platforms or applications.
This presentation will expose participants to a variety of cloud computing techniques, architectures and technology options, and will cover cloud fundamentals in a holistic manner spanning dimensions such as cost, operations and technology.
Microservices - opportunities, dilemmas and problems, by Łukasz Sowa
Presentation from Warsjawa 2014 workshop "Microservices in Scala". Topics covered:
- What are microservices?
- What's the difference between them and monolithic architectures?
- What are the different flavours of microservices?
What Big Data is, why it is needed by organizations that generate huge amounts of data, and when it should be used.
Data Lake and the rise of the microservices, by Bigstep
By simply looking at structured and unstructured data, Data Lakes enable companies to understand correlations between existing and new external data - such as social media - in ways traditional Business Intelligence tools cannot.
For this you need to find out the most efficient way to store and access structured or unstructured petabyte-sized data across your entire infrastructure.
In this meetup we'll answer the following questions:
1. Why would someone use a Data Lake?
2. Is it hard to build a Data Lake?
3. What are the main features that a Data Lake should bring in?
4. What’s the role of the microservices in the big data world?
Mtc learnings from isv & enterprise interaction, by Govind Kanshi
This is one of the dated presentations I keep getting requests for; please do reach out to me for the status on various things, as Azure keeps fixing and innovating every day.
There are a bunch of other things I can help you with to ensure you can take advantage of the Azure platform for OSS, .NET frameworks and databases.
Mtc learnings from isv & enterprise (dated Dec 2014), by Govind Kanshi
This is a slightly dated deck of our learnings; I keep getting multiple requests for it. I have removed one slide about access permissions (RBAC, which is now available).
Introduction to Big Data and NoSQL.
This presentation was given to the Master DBA course at John Bryce Education in Israel.
Work is based on presentations by Michael Naumov, Baruch Osoveskiy, Bill Graham and Ronen Fidel.
Paketo Buildpacks: the best way to build OCI images? DevopsDa..., by Anthony Dahanne
Buildpacks have been around for more than 10 years! At first they were used to detect and build an application before deploying it to certain PaaS platforms. Then, with their latest generation, the Cloud Native Buildpacks (incubating at the CNCF), we were able to build Docker (OCI) images. Are they a good alternative to the Dockerfile? What are the Paketo buildpacks? Which communities support them, and how?
Come find out during this ignite session.
Accelerate Enterprise Software Engineering with Platformless, by WSO2
Key takeaways:
Challenges of building platforms and the benefits of platformless.
Key principles of platformless, including API-first, cloud-native middleware, platform engineering, and developer experience.
How Choreo enables the platformless experience.
How key concepts like application architecture, domain-driven design, zero trust, and cell-based architecture are inherently a part of Choreo.
Demo of an end-to-end app built and deployed on Choreo.
Designing for Privacy in Amazon Web Services, by KrzysztofKkol1
Data privacy is one of the most critical issues that businesses face. This presentation shares insights on the principles and best practices for ensuring the resilience and security of your workload.
Drawing on a real-life project from the HR industry, the various challenges will be demonstrated: data protection, self-healing, business continuity, security, and transparency of data processing. This systematized approach made it possible to create a secure AWS cloud infrastructure that not only met strict compliance rules but also exceeded the client's expectations.
A Comprehensive Look at Generative AI in Retail App Testing.pdf, by kalichargn70th171
Traditional software testing methods are being challenged in retail, where customer expectations and technological advancements continually shape the landscape. Enter generative AI—a transformative subset of artificial intelligence technologies poised to revolutionize software testing.
SOCRadar Research Team: Latest Activities of IntelBroker, by SOCRadar
The European Union Agency for Law Enforcement Cooperation (Europol) has suffered an alleged data breach after a notorious threat actor claimed to have exfiltrated data from its systems. Infamous data leaker IntelBroker posted on the even more infamous BreachForums hacking forum, saying that Europol suffered a data breach this month.
The alleged breach affected Europol agencies CCSE, EC3, Europol Platform for Experts, Law Enforcement Forum, and SIRIUS. Infiltration of these entities can disrupt ongoing investigations and compromise sensitive intelligence shared among international law enforcement agencies.
However, this is neither the first nor the last activity of IntelBroker. We have compiled for you what happened in the last few days. To track such hacker activities on dark web sources like hacker forums, private Telegram channels, and other hidden platforms where cyber threats often originate, you can check SOCRadar’s Dark Web News.
Stay Informed on Threat Actors’ Activity on the Dark Web with SOCRadar!
Exploring Innovations in Data Repository Solutions - Insights from the U.S. G..., by Globus
The U.S. Geological Survey (USGS) has made substantial investments in meeting evolving scientific, technical, and policy driven demands on storing, managing, and delivering data. As these demands continue to grow in complexity and scale, the USGS must continue to explore innovative solutions to improve its management, curation, sharing, delivering, and preservation approaches for large-scale research data. Supporting these needs, the USGS has partnered with the University of Chicago-Globus to research and develop advanced repository components and workflows leveraging its current investment in Globus. The primary outcome of this partnership includes the development of a prototype enterprise repository, driven by USGS Data Release requirements, through exploration and implementation of the entire suite of the Globus platform offerings, including Globus Flow, Globus Auth, Globus Transfer, and Globus Search. This presentation will provide insights into this research partnership, introduce the unique requirements and challenges being addressed and provide relevant project progress.
Enhancing Research Orchestration Capabilities at ORNL.pdf, by Globus
Cross-facility research orchestration comes with ever-changing constraints regarding the availability and suitability of various compute and data resources. In short, a flexible data and processing fabric is needed to enable the dynamic redirection of data and compute tasks throughout the lifecycle of an experiment. In this talk, we illustrate how we easily leveraged Globus services to instrument the ACE research testbed at the Oak Ridge Leadership Computing Facility with flexible data and task orchestration capabilities.
OpenFOAM solver for Helmholtz equation, helmholtzFoam / helmholtzBubbleFoam, by takuyayamamoto1800
In this slide, we show the simulation example and the way to compile this solver.
In this solver, the Helmholtz equation can be solved by helmholtzFoam. Also, the Helmholtz equation with uniformly dispersed bubbles can be simulated by helmholtzBubbleFoam.
Why React Native as a Strategic Advantage for Startup Innovation.pdf, by ayushiqss
Do you know that React Native is being increasingly adopted by startups as well as big companies in the mobile app development industry? Big names like Facebook, Instagram, and Pinterest have already integrated this robust open-source framework.
In fact, according to a report by Statista, the number of React Native developers has been steadily increasing over the years, reaching an estimated 1.9 million by the end of 2024. This means that the demand for this framework in the job market has been growing making it a valuable skill.
But what makes React Native so popular for mobile application development? It offers excellent cross-platform capabilities among other benefits. This way, with React Native, developers can write code once and run it on both iOS and Android devices thus saving time and resources leading to shorter development cycles hence faster time-to-market for your app.
Let’s take the example of a startup, which wanted to release their app on both iOS and Android at once. Through the use of React Native they managed to create an app and bring it into the market within a very short period. This helped them gain an advantage over their competitors because they had access to a large user base who were able to generate revenue quickly for them.
How Does XfilesPro Ensure Security While Sharing Documents in Salesforce?, by XfilesPro
Worried about document security while sharing them in Salesforce? Fret no more! Here are the top-notch security standards XfilesPro upholds to ensure strong security for your Salesforce documents while sharing with internal or external people.
To learn more, read the blog: https://www.xfilespro.com/how-does-xfilespro-make-document-sharing-secure-and-seamless-in-salesforce/
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus..., by Globus
Large Language Models (LLMs) are currently the center of attention in the tech world, particularly for their potential to advance research. In this presentation, we'll explore a straightforward and effective method for quickly initiating inference runs on supercomputers using the vLLM tool with Globus Compute, specifically on the Polaris system at ALCF. We'll begin by briefly discussing the popularity and applications of LLMs in various fields. Following this, we will introduce the vLLM tool, and explain how it integrates with Globus Compute to efficiently manage LLM operations on Polaris. Attendees will learn the practical aspects of setting up and remotely triggering LLMs from local machines, focusing on ease of use and efficiency. This talk is ideal for researchers and practitioners looking to leverage the power of LLMs in their work, offering a clear guide to harnessing supercomputing resources for quick and effective LLM inference.
Globus Connect Server Deep Dive - GlobusWorld 2024, by Globus
We explore the Globus Connect Server (GCS) architecture and experiment with advanced configuration options and use cases. This content is targeted at system administrators who are familiar with GCS and currently operate—or are planning to operate—broader deployments at their institution.
Quarkus Hidden and Forbidden Extensions, by Max Andersen
Quarkus has a vast extension ecosystem and is known for its subsonic and subatomic feature set. Some of these features are not as well known, and some extensions are less talked about, but that does not make them less interesting - quite the opposite.
Come join this talk to see some tips and tricks for using Quarkus and some of the lesser known features, extensions and development techniques.
Listen to the keynote address and hear about the latest developments from Rachana Ananthakrishnan and Ian Foster who review the updates to the Globus Platform and Service, and the relevance of Globus to the scientific community as an automation platform to accelerate scientific discovery.
We describe the deployment and use of Globus Compute for remote computation. This content is aimed at researchers who wish to compute on remote resources using a unified programming interface, as well as system administrators who will deploy and operate Globus Compute services on their research computing infrastructure.
2. This Presentation
• Covers the basic explanation of an IMDG solution
• Explains how it works and how it can be used within an architecture
• Shows some use cases
• Yes, it is all in English
7. Just a Few Years Ago
• Storing
  – Oracle
  – IBM DB2
  – IBM Informix
  – SAP Sybase
  – MySQL
  – PostgreSQL
  – SQL Server
  – Lotus Notes
• Cache
  – Memcached
  – EhCache
  – New generation
    • GridGain
    • Tangosol (aka Coherence)
    • Terracotta
11. In-Memory Data Grid (IMDG)
Data management software whose data structures reside entirely in RAM and are distributed among multiple servers.
Handles Big Data’s “big three V’s”: Velocity, Variability, Volume.
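To make the definition concrete, here is a minimal sketch using Hazelcast, one of the IMDG products named later in this deck (import paths are for Hazelcast 4.x/5.x; the cluster and map names are invented for the example, and other grids expose the same concepts under different APIs):

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import java.util.Map;

public class ImdgHello {
    public static void main(String[] args) {
        // Each started instance becomes a node; nodes discover each other
        // and form a cluster, partitioning the map's data across their RAM.
        HazelcastInstance node = Hazelcast.newHazelcastInstance();

        // A distributed map: entries live in memory, spread over the cluster.
        Map<String, String> customers = node.getMap("customers");
        customers.put("42", "ACME Corp");

        // Any node in the cluster sees the same logical data set.
        System.out.println(customers.get("42"));

        node.shutdown();
    }
}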
14. Resilience
Nodes can fail randomly without data loss while minimizing the performance impact on running applications (non-disruptive automated detection and recovery).
15. Programming Model
A way for developers to easily program the cluster of machines as if it were a single machine.
21. Brief History
• Cache: in-process caching of key->value data structures
• Distributed Cache: partitioned cache nodes
• IMDG: partitioned system of record
• IMDG.next()
26. Clustered Caching Explained
Partitioned, fault-tolerant, self-healing cache
• Cluster of nodes, each holding a percentage of the primary data locally
• Backups of primary data are distributed across all other nodes
• Logical view of all data from any node
• All nodes verify each other's health
• In the event a node is unhealthy, other nodes diagnose its state
• Unhealthy nodes are isolated from the cluster
• Remaining nodes redistribute primary and backup responsibilities to healthy nodes
27. Caching Patterns
Cache-Aside: the developer manages the cache
• Check the cache before reading from the data source
• Put data into the cache after reading from the data source
• Evict or update the cache when updating the data source
(Diagram: the application reads through the cache, falling back to the DAO)
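A minimal sketch of the cache-aside flow described above, in plain Java. The cache map and UserDao interface here are hypothetical stand-ins for whatever IMDG map and data-access object you actually use; the check/populate/evict flow looks the same against GemFire, Coherence or Hazelcast maps.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical cache-aside wrapper: the developer, not the grid, drives the cache.
class UserRepository {
    private final ConcurrentMap<Long, String> cache = new ConcurrentHashMap<>(); // stand-in for an IMDG map
    private final UserDao dao;                                                   // stand-in for the data source

    UserRepository(UserDao dao) { this.dao = dao; }

    String findName(long id) {
        String name = cache.get(id);          // 1. check the cache first
        if (name == null) {
            name = dao.loadName(id);          // 2. on a miss, read the data source
            if (name != null) {
                cache.put(id, name);          //    and put the result into the cache
            }
        }
        return name;
    }

    void updateName(long id, String name) {
        dao.saveName(id, name);               // 3. update the data source...
        cache.remove(id);                     //    ...then evict (or overwrite) the cached entry
    }
}

interface UserDao {
    String loadName(long id);
    void saveName(long id, String name);
}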
30. Sharding: Unlimited Data and Processing Capacity
• Data is load balanced across the data grid.
• Data and processing capacity scale linearly.
• Ownership responsibilities are also partitioned.
• Access and update latency are constant.
• Best for large sets of frequently updated data.
(Diagram: applications reach the in-memory data grid through virtual load balancing; data is spread across the grid processes)
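The load balancing above boils down to a deterministic key-to-partition mapping, so any node can locate an entry's owner without a central directory. Below is a simplified sketch of the idea; real grids use consistent hashing and rebalance partitions when membership changes, rather than a fixed modulo over a node array.

// Simplified owner lookup: hash the key, map it to one of N partitions,
// then map the partition to the node that currently owns it.
class PartitionRouter {
    private final int partitionCount;

    PartitionRouter(int partitionCount) { this.partitionCount = partitionCount; }

    int partitionFor(Object key) {
        // Math.floorMod avoids negative results for negative hash codes.
        return Math.floorMod(key.hashCode(), partitionCount);
    }

    String ownerFor(Object key, String[] nodes) {
        // In a real grid the partition table is redistributed as nodes join or leave.
        return nodes[partitionFor(key) % nodes.length];
    }
}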
31. Fault Tolerance: High Availability
• Automatic fault tolerance management.
• Backups are stored on a separate machine.
• Even distribution of backup responsibilities.
• Configurable number of backup copies.
• Once-and-only-once processing guarantees.
(Diagram: applications, virtual load balancing and fault tolerance management layered over the grid processes holding the data)
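As an illustration of "configurable number of backup copies", here is a Hazelcast-style sketch: MapConfig.setBackupCount sets how many synchronous replicas of each partition are kept on other members. The map name "orders" and the count of 2 are invented for the example; other IMDG products expose the same knob through their own configuration.

import com.hazelcast.config.Config;
import com.hazelcast.config.MapConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class BackupConfigExample {
    public static void main(String[] args) {
        Config config = new Config();

        // Keep two synchronous backup copies of every "orders" partition
        // on other cluster members, so two nodes can fail without data loss.
        MapConfig ordersConfig = new MapConfig("orders").setBackupCount(2);
        config.addMapConfig(ordersConfig);

        HazelcastInstance node = Hazelcast.newHazelcastInstance(config);
        node.getMap("orders").put("o-1", "pending");
        node.shutdown();
    }
}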
32. Replicated Caching: Rapid Access to Reference Data
• The entire data set is replicated.
• Data is stored in application-ready format.
• Data access is immediate.
• Updates are replicated across the data grid.
• Best for small sets of static data.
(Diagram: every grid process holds a full replica of the data used by the applications)
33. Near Caching: Rapid Data Access
• A blend of the replicated and partitioned topologies.
• Recently used data is stored locally.
• Repetitive data access is local and immediate.
• Automatically populated upon data access.
• Automatic invalidation of updated data.
• Scale tiers independently.
(Diagram: applications keep a local near cache in front of the load-balanced grid processes)
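A conceptual sketch of the near-cache read path: a small local map in the application sits in front of the partitioned grid, and local entries are invalidated when the grid reports an update. The NearCache class below is hypothetical; products such as Coherence, GemFire and Hazelcast provide this as a built-in, configurable feature rather than something you write yourself.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical near cache: recently used entries are kept locally,
// misses fall through to the (remote) partitioned grid.
class NearCache<K, V> {
    private final Map<K, V> local = new ConcurrentHashMap<>();
    private final Function<K, V> gridLookup; // stand-in for a remote IMDG get()

    NearCache(Function<K, V> gridLookup) { this.gridLookup = gridLookup; }

    V get(K key) {
        // Local hit: immediate, no network hop.
        // Local miss: fetch from the grid and keep the entry for next time.
        return local.computeIfAbsent(key, gridLookup);
    }

    void invalidate(K key) {
        // Called when the grid notifies us that the entry changed elsewhere.
        local.remove(key);
    }
}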
34. Parallel Processing: Querying, Processing, Aggregating in the Data Grid
• Send the processing to where the data lives.
• Processing is performed in parallel across the grid.
  – Query the data grid
  – Continuous query cache
  – Parallel processing on the data grid
  – Map/Reduce aggregation
• Once-and-only-once guarantees.
• Processing scales with the grid.
(Diagram: the application submits a processing unit that runs on each grid process)
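To show the "send the processing to the data" idea in miniature, the sketch below computes a partial aggregate per partition and then combines the partials, which is what a grid-side map/reduce or parallel query does across members. The data layout here is a plain local list of partition maps, purely for illustration; a real grid runs the per-partition step inside the node that owns each partition.

import java.util.List;
import java.util.Map;

class GridAggregationSketch {
    // Each map stands in for the data held by one partition / owning node.
    static double totalOrderValue(List<Map<String, Double>> partitions) {
        return partitions.parallelStream()                  // "map" step runs per partition, in parallel
                .mapToDouble(p -> p.values().stream()
                                   .mapToDouble(Double::doubleValue)
                                   .sum())                  // local partial aggregate
                .sum();                                      // "reduce" step combines the partials
    }

    public static void main(String[] args) {
        List<Map<String, Double>> partitions = List.of(
                Map.of("o-1", 10.0, "o-2", 5.5),
                Map.of("o-3", 7.25));
        System.out.println(totalOrderValue(partitions));    // prints 22.75
    }
}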
35. Event Notifications: Event-Driven Architectures
• Grid-based event notification
  – JavaBean model; key- and filter-based events
• "Live Objects"
  – Objects can respond to their own state changes
  – State is always recoverable
  – Build complex staged event-driven architectures
(Diagram: applications receive events from the grid processes)
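A Hazelcast-style sketch of grid-based event notification: a listener is registered on a distributed map and fires when entries are added on any member. The EntryAddedListener interface and addEntryListener call are from the Hazelcast 4.x/5.x Java API, and the map name and entry are invented for the example; other grids (Coherence MapListener, GemFire CacheListener) offer the same key- and filter-based model under different names.

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;
import com.hazelcast.map.listener.EntryAddedListener;

public class GridEventsExample {
    public static void main(String[] args) {
        HazelcastInstance node = Hazelcast.newHazelcastInstance();
        IMap<String, String> trades = node.getMap("trades");

        // Fires on this member whenever an entry is added anywhere in the cluster.
        trades.addEntryListener(
                (EntryAddedListener<String, String>) event ->
                        System.out.println("New trade: " + event.getKey() + " = " + event.getValue()),
                true); // include the value in the event

        trades.put("t-1", "BUY 100 ACME");
        node.shutdown();
    }
}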
42. WebLogic and Coherence Integration
Built in, out of the box
• Administration, operations and management built into WebLogic
• Declarative scale-out session management
• Cache data access with synchronous/asynchronous read/write-through
• Analytics, events and compute
(Diagram: WebLogic and Coherence nodes combined for data caching, query/event processing, declarative session management, and persistence caching with read- and write-through)
52. Wasup?
• IMDG is an IT solution
• It can be seen as an elastic service
• Deployed stand-alone or embedded
• Provides built-in:
  – Distributed data cache
  – Process communication (Queue/Topic)
  – Process synchronization (Locks)
  – Process executions (Queries, MPP, Map/Reduce)
• Increases performance and scalability, reduces bottlenecks
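As a closing sketch of the built-ins listed above, here is Hazelcast-style usage of a distributed queue and a distributed lock. The CP-subsystem lock shown is the Hazelcast 4.x+ API, and names like "work" and "inventory-lock" are invented for the example; other IMDGs expose equivalent queue and lock primitives under their own APIs.

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.cp.lock.FencedLock;
import java.util.concurrent.BlockingQueue;

public class GridPrimitivesExample {
    public static void main(String[] args) throws InterruptedException {
        HazelcastInstance node = Hazelcast.newHazelcastInstance();

        // Process communication: a queue visible to every member of the cluster.
        BlockingQueue<String> work = node.getQueue("work");
        work.offer("resize-image:42");
        System.out.println("took: " + work.take());

        // Process synchronization: a cluster-wide lock (Hazelcast CP subsystem).
        FencedLock lock = node.getCPSubsystem().getLock("inventory-lock");
        lock.lock();
        try {
            // critical section: only one process in the cluster runs this at a time
        } finally {
            lock.unlock();
        }
        node.shutdown();
    }
}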
54. Acknowledgment
• Uri Cohen - In-Memory Data Grids, Demystified
• Oracle Coherence - Coherence-cnt1983393.pdf
• Hazelcast - Home Site
• GridGain - Home Site