WSO2 API Manager Reference Architecture for Pivotal Cloud Foundry - Imesh Gunaratne
This presentation includes an introduction to Pivotal Cloud Foundry (PCF) and how WSO2 API Manager can be deployed on PCF using a PCF Tile, a BOSH release, and a Service Broker.
Presentation delivered by James MacNee, Systems Development Team Leader, West College Scotland, at the Scottish Moodle User Group (SMUG) meeting on the 2nd of August, 2018.
React is a JavaScript library for building user interfaces that allows developers to create reusable UI components in a composable and encapsulated way. It helps developers create responsive and adaptive UIs that are easy to structure, test, and maintain by reducing entanglement between components. The document discusses React's advantages and challenges, including creating asynchronous and distributed applications with reusable and tested JavaScript code.
#ApacheKafkaTLV: Building distributed, fault-tolerant processing apps with Kafka Streams - use case
The second part of the #2 meetup, delivered by Anatoly Tichonov (Mentory). Hosted by WeWork Sarona TLV.
Embracing DevOps through database migrations with Flyway - Red Gate Software
"Evolutionary Database Design" is the best phrase to describe database migrations. But what do we know about database migrations using PostgreSQL containers?
This session will provide you with answers and guidelines to get you started with Database DevOps practices for your organization. You will learn the aspects, methods, and strategies to build and manage your database deployments through CI/CD pipelines with open source tools like Flyway, Jenkins, and Kubernetes.
You will be able to build your first database migration through a CI/CD pipeline at the end of this session.
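Versioned migrations like the Flyway ones described above are applied in version order, derived from the filename convention V&lt;version&gt;__&lt;description&gt;.sql. A minimal sketch of that ordering rule in Python (the filenames are made up; real Flyway also records applied versions in a schema history table):

```python
import re

# Flyway-style versioned migrations: V<version>__<description>.sql,
# applied in ascending version order.
MIGRATION_RE = re.compile(r"^V(?P<version>\d+(?:[._]\d+)*)__(?P<desc>.+)\.sql$")

def migration_order(filenames):
    """Return migration filenames sorted the way a versioned-migration
    tool would apply them (numeric, component-wise version comparison)."""
    def key(name):
        m = MIGRATION_RE.match(name)
        if m is None:
            raise ValueError(f"not a versioned migration: {name}")
        return tuple(int(p) for p in re.split(r"[._]", m.group("version")))
    return sorted(filenames, key=key)

print(migration_order([
    "V2__add_index.sql",
    "V1_1__seed_data.sql",
    "V10__drop_legacy.sql",
    "V1__create_tables.sql",
]))
```

Note the numeric comparison: V10 sorts after V2, which a plain string sort would get wrong.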
EUGM 2013 - Eufrozina Hoffmann (ChemAxon): Marvin extending the scope of usab... - ChemAxon
The Marvin package consists of applications for drawing and visualization of chemical structures and reactions. We will focus on the main features of MarvinSketch version 6.0 giving special attention to structure drawing, displaying and usability improvements. To finish we will be looking at the new member of the package: Marvin for JavaScript, which we will introduce as an easily integrable web component for 2D sketching and basic rendering.
Streams provide a flexible branching workflow that enforces best practices, intelligently organizing code modules and branching policies. Streams ensure changes flow correctly and simplify common processes like merging, increasing agility and scalability. The required components are a 2011.1 Perforce server and a P4V or P4 client. Streams will be available in August 2011 as part of beta releases.
This document discusses Microsoft's AppFabric distributed caching technology. It provides an overview of AppFabric, why distributed caching is useful, how to configure AppFabric clients and servers, and how to manage data in an AppFabric cache, including concurrency and high availability. While version 1 has some limitations, it is suitable as a session state provider, and the author expects version 2 to improve the product.
The document discusses migrating a current transcription project to the Scripto/Omeka platform to enhance functionality and streamline workflows. It describes setting up the platform with a LAMP server and necessary software. Challenges include tracking progress, exporting collections from another system into the new transcription system, and designing the transcription interface. Next steps include separating functionality from the Omeka core, improving progress tracking, documentation, data export, and other tasks.
Data Con LA 2019 - Data warehouse and Kubernetes: Lessons from ClickHouse Ope... - Data Con LA
Kubernetes Operators allow you to create custom resources in Kubernetes. They are popular for managing databases, which tend to be complex to manage. Our team built an operator to stand up ClickHouse, a popular open source data warehouse, in Kubernetes clusters. We'll share major learnings from this experience which we feel are applicable generally to running scalable, high performance databases in this environment. The talk starts with a level-set of Kubernetes, ClickHouse, and what an operator does. We'll then jump into the design of the ClickHouse operator example, covering challenges associated with the following problems:
* Reducing the complexity of Kubernetes through definition of new resources for databases
* Defining and managing storage
* Performance, including comparative results which look pretty good
* Monitoring
* Upgrade and configuration changes
Kubernetes is not free from challenges, and we'll cover these as we touch on each point above. We'll conclude with a summary of reasons that we think Kubernetes is a great environment for data warehouses, based on our experience to date.
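The "new resources for databases" point above is the heart of the operator pattern: instead of hand-wiring StatefulSets, Services, and ConfigMaps, you declare one high-level object and the operator reconciles it into the low-level pieces. A minimal sketch of what such a declaration might look like; the kind and field names are illustrative assumptions, not the actual schema of any ClickHouse operator:

```python
# Illustrative custom-resource shape for a sharded, replicated database
# cluster. Kind and field names are assumptions for illustration only.
cluster_spec = {
    "apiVersion": "example.com/v1",
    "kind": "ClickHouseCluster",
    "metadata": {"name": "analytics"},
    "spec": {
        "shards": 2,      # horizontal partitioning of the data
        "replicas": 2,    # copies of each shard for availability
        "storage": {"size": "100Gi", "storageClass": "fast-ssd"},
    },
}

def total_pods(spec):
    # An operator would reconcile this spec into shards * replicas pods,
    # plus the surrounding Services and persistent volumes.
    s = spec["spec"]
    return s["shards"] * s["replicas"]

print(total_pods(cluster_spec))  # 4
```

The user edits one document; the operator owns the mapping to the many underlying Kubernetes objects, which is exactly the complexity reduction the abstract describes.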
Develop, Test and Deploy your SOA Application through a Single Platform - WSO2
WSO2 Carbon Studio is a development tool that allows users to develop, test, and deploy SOA applications on the WSO2 Carbon platform through a single integrated environment. The Carbon platform provides capabilities like service hosting, message mediation, data access, security, and process orchestration. Carbon Studio supports developing various service types, mediation configurations, data services, registry resources, and business processes. It allows deploying applications to local and remote Carbon servers for debugging and testing purposes. Carbon Studio aims to provide a one stop solution for the entire SOA application lifecycle.
Introduction to Ruby Native Extensions and Foreign Function Interface - Oleksii Sukhovii
Native extensions allow Ruby code to directly interface with external C libraries for improved performance. They are C code compiled as Ruby gems that convert between Ruby and C data types. While faster, native extensions require C expertise and careful memory management. Alternatives like Ruby Inline, FFI and Fiddle provide safer interfaces but introduce overhead. For high performance needs with minimal lines of C code, inline is best; FFI performs well and is easy to use; Fiddle is simplest but slower. Native extensions remain the highest performing approach when performance is critical.
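The FFI approach the abstract describes, binding an existing C library at runtime instead of writing and compiling glue code, exists in most dynamic languages. For comparison, the same idea sketched with Python's ctypes module (the abstract's Ruby examples would use the ffi gem instead):

```python
import ctypes

# Load symbols from the current process; works on Unix-like systems.
libc = ctypes.CDLL(None)

# Declaring the C signature tells the FFI layer how to convert between
# native and interpreter types, the same Ruby<->C type conversion concern
# the abstract mentions for native extensions.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"native call"))  # 11
```

No compiler is needed, which is the safety and convenience trade-off: the call crosses the FFI boundary at runtime, adding the per-call overhead the abstract attributes to FFI relative to a compiled native extension.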
This document outlines an integration between the Contur and HEOS software. The integration is focused on allowing scientists to record experimental data in Contur and register compounds to HEOS as part of their workflow. It describes the Contur REST API and protocol execution framework that can extract and create Contur content. It also describes the HEOS SOAP API that can extract and create content in HEOS, including registering compounds. Components and protocols are provided that use these APIs to facilitate transferring data directly from Contur experiments to HEOS compound registration, without needing to re-enter information, in order to save time and reduce errors.
Empower your SharePoint sites with SPFx extensions - João Ferreira
The new sites and the modern experience introduced in SharePoint revolutionized the way users interact with the platform, but at the same time they closed the door on customizations like JSLink and Custom Actions that were typically used to extend the default functionality.
Over time, Microsoft has been bringing most of the extensibility options back to the modern environment with the SharePoint Framework Extensions.
In this session, I explained all the new customization methods available, namely Application Customizers, Field Customizers, and Command Sets, and how they can be used to extend SharePoint functionality.
This session was given at the SharePoint Saturday Lisbon 2017
http://www.spsevents.org/city/Lisbon/Lisbon2017/speakers
(ATS6-PLAT09) Deploying Applications on load balanced AEP servers for high av... - BIOVIA
This document discusses deploying Accelrys Enterprise Platform (AEP) servers in a load balanced configuration for high availability. It recommends using a staging server to test configurations before deploying to production nodes. All nodes should be configured identically and share storage. A load balancer should be configured to distribute traffic evenly across nodes. Applications need to be packaged and deployed identically to each node to ensure consistency across the load balanced farm. Load balancing improves availability, scalability and performance but requires additional infrastructure and configuration.
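The "distribute traffic evenly across nodes" requirement above is, in its simplest form, round-robin selection over identically configured nodes. A minimal sketch (node names are made up; real load balancers add health checks, session affinity, and weighting):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Hands out identically configured backend nodes in rotation."""
    def __init__(self, nodes):
        self._nodes = cycle(nodes)

    def pick(self):
        return next(self._nodes)

lb = RoundRobinBalancer(["aep-node-1", "aep-node-2", "aep-node-3"])
print([lb.pick() for _ in range(6)])
```

This also shows why the abstract insists on packaging and deploying applications identically to every node: round-robin only works if any node can serve any request.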
Introduction to SQLStreamBuilder: Rich Streaming SQL Interface for Creating a... - Eventador
Discover how SQLStreamBuilder enables you to run streaming SQL against unbounded streams of data and create new, persistent streaming jobs.
https://eventador.io/sql-streambuilder/
Kafka Connect: Real-time Data Integration at Scale with Apache Kafka, Ewen Ch... - confluent
Many companies are adopting Apache Kafka to power their data pipelines, including LinkedIn, Netflix, and Airbnb. Kafka’s ability to handle high throughput real-time data makes it a perfect fit for solving the data integration problem, acting as the common buffer for all your data and bridging the gap between streaming and batch systems.
However, building a data pipeline around Kafka today can be challenging because it requires combining a wide variety of tools to collect data from disparate data systems. One tool streams updates from your database to Kafka, another imports logs, and yet another exports to HDFS. As a result, building a data pipeline can take significant engineering effort and has high operational overhead because all these different tools require ongoing monitoring and maintenance. Additionally, some of the tools are simply a poor fit for the job: the fragmented nature of the data integration tools ecosystem leads to creative but misguided solutions, such as misusing stream processing frameworks for data integration purposes.
We describe the design and implementation of Kafka Connect, Kafka’s new tool for scalable, fault-tolerant data import and export. First we’ll discuss some existing tools in the space and why they fall short when applied to data integration at large scale. Next, we will explore Kafka Connect’s design and how it compares to systems with similar goals, discussing key design decisions that trade off between ease of use for connector developers, operational complexity, and reuse of existing connectors. Finally, we’ll discuss how standardizing on Kafka Connect can ultimately lead to simplifying your entire data pipeline, making ETL into your data warehouse and enabling stream processing applications as simple as adding another Kafka connector.
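The "just add another connector" simplicity described above comes from Kafka Connect's declarative model: a connector is a small JSON config submitted to the Connect REST API (port 8083 by default) rather than custom pipeline code. A sketch of such a payload; the topic and file names are made up, and FileStreamSinkConnector is one of the example connectors shipped with Kafka:

```python
import json

connector = {
    "name": "logs-to-file",
    "config": {
        "connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
        "tasks.max": "1",          # parallelism for this connector
        "topics": "app-logs",      # source topic to drain
        "file": "/tmp/app-logs.out",
    },
}

payload = json.dumps(connector)
# Submitting it would be an HTTP POST against a running Connect worker, e.g.:
#   curl -X POST -H "Content-Type: application/json" \
#        --data @connector.json http://localhost:8083/connectors
print(payload)
```

Fault tolerance and scaling are handled by the Connect framework itself, so the operational surface stays the same as you add connectors.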
Rational Synergy 7.1 introduces significant performance improvements for working on a central server over a WAN. It includes a new WAN client and server architecture that communicates using HTTP/HTTPS and provides up to 20x faster performance. Additional enhancements include usability improvements, improved error reporting, support for files larger than 2GB, and a new archiver. The release aims to enable distributed teams to work globally on a single repository with reduced administration costs.
Check out the latest article by Darryl Griffiths from Aliter Consulting. SAP on Azure Web Dispatcher High Availability provides an overview of how to utilise an Azure Internal Load Balancer in conjunction with parallel SAP Web Dispatchers to achieve a highly available, load-balanced and scalable solution for fronting SAP Fiori and other SAP components. This deployment is proving very successful on a current SAP Fiori and SAP S/4HANA implementation project for one of our clients.
Simplifying Services with the Apache Brooklyn Catalog - VMware Tanzu
The document describes how the Apache Brooklyn framework can be used to simplify managing services in Cloud Foundry. It discusses the Brooklyn Service Broker and Plugin that allow Brooklyn managed applications and services to be used from Cloud Foundry. The Broker exposes the entire Brooklyn catalog as Cloud Foundry services. The Plugin provides commands to manage Brooklyn services when developing and deploying Cloud Foundry applications.
This webinar will provide an overview of new features in Monyog v7.04 including performance improvements and a complete walkthrough. It will also discuss the product roadmap which is based on popular feature requests, including plans to add support for multi-source replication, SNMP v2, query performance comparison, and integration with Slack and PagerDuty. Time will be provided at the end for questions.
This document discusses different cloud computing layers (IaaS, PaaS, SaaS) and how IBM Integration Bus can integrate with them. It describes how tools like Chef, IBM UrbanCode Deploy, and Bluemix PaaS can be used to automate deployment and management of IIB in cloud environments. The document also discusses how IIB can connect to SaaS applications and provide APIs to expose integration services as cloud applications.
Human processes and system automation work better together when integrated. Microsoft System Center Orchestrator allows users to automate IT processes across platforms through runbooks while integrating with human workflows in Service Manager. It provides templates and tools to define and monitor automated processes across infrastructure through connectors to various systems and platforms.
The BBC moved their OWLIM graph database to Amazon Web Services (AWS) to take ownership of OWLIM maintenance and support AWS adoption. They deployed OWLIM using AWS OpsWorks to define the infrastructure and Chef recipes to install and configure each layer. While OpsWorks and Chef provide benefits like simplicity and reuse, autoscaling is not supported and Chef recipes could be improved. Future plans include autoscaling, backup improvements, and refactoring Chef recipes.
Gentle Introduction to Semantic Enrichment - logomachy
This talk is a gentle introduction to the concept of semantic enrichment, demonstrating how publishers are using semantic technology such as Ontotext's GraphDB and publishing platform to make the most of their content.
This is a talk I gave at the LT-Innovate Summit 2014 in Brussels. I talk about how publishers are leveraging language and semantics to create new products and services, so that publishers can 'know what they know'.
Introduction to Semantics for Digital Surrey - logomachy
The document discusses semantics and semantic technology. It describes semantics as being about interaction via meaning. It explains that ontologies are used to create context from which new information can be inferred. Examples are given of how semantics can be used, including for e-commerce, search engine optimization, and making use of information from different sources on the internet in a safe manner through linked data.
Awarded Second Prize at an International Event called “Present around the World in 10 minutes”, organized by IET (The Institution of Engineering and Technology) on "Future Technologies for Emerging Markets" held for the Bangalore region, May 2012. I presented on "Semantic Web - Web 3.0"
Zarafa SummerCamp 2012 - Steve Hardy Friday Keynote - Zarafa
The document summarizes updates to the Zarafa development in 2012, including:
1) Expanded development teams with new offices in Ukraine and additions in India and Delft, and adoption of the Scrum methodology with releases every two weeks.
2) Improvements to tracking tickets in JIRA and assessing tickets within 1 day with a focus on fixing bugs over features.
3) Increased the number of supported platforms from 38, now including Windows, and reduced build time for all distributions from over 6 hours to under 1 hour.
4) Continued work on the WebApp, plugins, Z-Admin, integration with other services like Spreed, and contributions from an expanded international team.
Scaling Foursquare Based on Check-ins and Recommendations (Escalando Foursquare basado en Checkins y Recomendaciones) - Manuel Vargas
1) Foursquare scaled its data storage by sharding and replicating across multiple databases as user and venue data grew significantly.
2) As the application complexity increased, Foursquare transitioned to a service-oriented architecture using Finagle for RPC but faced challenges with duplication, tracing issues, and reliability.
3) Foursquare developed common tools for builds, deploys, monitoring, tracing, and circuit breaking to help manage the increasingly distributed system and facilitate independent development of features.
Speed up Interactive Analytic Queries over Existing Big Data on Hadoop with P... - viirya
This document discusses using Presto to enable interactive analytic queries over large datasets on Hadoop. Presto is a distributed SQL query engine that is optimized for fast, ad-hoc queries against data stored in various data sources like HDFS, Cassandra and MySQL. It uses a coordinator and worker architecture to parallelize query execution across clusters. The document demonstrates how to deploy and configure Presto, and provides a demo of integrating Presto with Grafana for interactive data visualization.
Revolution R Enterprise 7.4 - Presentation by Bill Jacobs 11Jun15 - Revolution Analytics
This document outlines several improvements and updates to ScaleR including new capabilities for DeployR, an upgraded R Engine, improved performance for various models, added support for HDFS caching and updated security features. It also notes changes to packages, platforms, and the separate installation of the R Engine.
Database Migrations with Gradle and Liquibase - Dan Stine
Database migration scripts are a notorious source of difficulty in the software delivery process. This session will discuss how we neutralized this all too common headache.
Now our deployment framework executes database migrations automatically with every application deploy, and the QA team performs self-service full stack deployments in test environments. The resulting additional bandwidth has been invested in more frequent software releases, and the opportunity to focus on higher-value tasks.
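Running migrations "automatically with every application deploy" works because migration tools are idempotent: each migration is recorded once, so re-running a deploy is a no-op. A minimal sketch of that mechanism using a history table, as Flyway and Liquibase do with their schema history / changelog tables (table and migration contents here are made up):

```python
import sqlite3

MIGRATIONS = [
    ("001", "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"),
    ("002", "ALTER TABLE users ADD COLUMN email TEXT"),
]

def migrate(conn):
    """Apply any migrations not yet recorded; safe to run on every deploy."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_history (version TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT version FROM schema_history")}
    ran = []
    for version, sql in MIGRATIONS:
        if version not in applied:
            conn.execute(sql)
            conn.execute("INSERT INTO schema_history (version) VALUES (?)", (version,))
            ran.append(version)
    conn.commit()
    return ran

conn = sqlite3.connect(":memory:")
print(migrate(conn))  # first deploy applies both: ['001', '002']
print(migrate(conn))  # second deploy is a no-op: []
```

This idempotence is what makes self-service full-stack deployments safe: QA can redeploy freely without anyone checking the database state first.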
Cloud Foundry Compared With Other PaaSes (Cloud Foundry Summit 2014) (VMware Tanzu)
Business Track presented by Michael Maximilien, Chief Architect PaaS Innovation at IBM & James Bayer, Director of Product Management, Cloud Foundry at Pivotal.
This document discusses load balancing as a service (LBaaS) in OpenStack Havana. It covers:
1. A focus for Havana will be supporting multiple load balancing technologies and vendors through LBaaS drivers while maintaining a common tenant API.
2. The architecture proposed separates the LBaaS plugin from drivers for specific load balancers. This allows different load balancing solutions like network services, virtual appliances, and hardware to be used.
3. Additional topics to be addressed for Havana include the tenant API to support multiple vendors, load balancing across networks through SNAT/DSR, and hierarchical modeling of load balancing configurations.
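The plugin/driver split described in point 2 can be sketched as a small driver interface that any backend implements behind a common tenant API. The class and method names here are illustrative, not the actual Neutron LBaaS code:

```python
from abc import ABC, abstractmethod

class LoadBalancerDriver(ABC):
    """Hypothetical driver interface: the LBaaS plugin talks to any
    backend (network service, virtual appliance, hardware) through
    the same small surface."""

    @abstractmethod
    def create_pool(self, pool: dict) -> None: ...

    @abstractmethod
    def add_member(self, pool_id: str, address: str, port: int) -> None: ...

class HaproxyDriver(LoadBalancerDriver):
    """Toy in-memory driver standing in for a real backend."""
    def __init__(self):
        self.pools = {}

    def create_pool(self, pool):
        # Store the pool definition and start with no members.
        self.pools[pool["id"]] = {"members": [], **pool}

    def add_member(self, pool_id, address, port):
        self.pools[pool_id]["members"].append((address, port))

driver = HaproxyDriver()
driver.create_pool({"id": "web", "lb_method": "ROUND_ROBIN"})
driver.add_member("web", "10.0.0.5", 80)
```

Because tenants only see the common API, a deployment can swap `HaproxyDriver` for a vendor driver without changing tenant-facing calls.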
Managing multi tenant resource toward Hive 2.0 (Kai Sasaki)
This document discusses Treasure Data's migration architecture for managing resources across multiple clusters when upgrading from Hive 1.x to Hive 2.0. It introduces components like PerfectQueue and Plazma that enable blue-green deployment without downtime. It also describes how automatic testing and validation is done to prevent performance degradation. Resource management is discussed to define resources per account across different job queues and Hadoop clusters. Brief performance comparisons show improvements from Hive 2.x features like Tez and vectorization.
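The blue-green idea can be sketched as a router that sends jobs to the current cluster until the new one is validated, then flips traffic atomically. Names are illustrative, not Treasure Data's implementation:

```python
class BlueGreenRouter:
    """Hypothetical router: 'blue' is the current (Hive 1.x) cluster,
    'green' the new (Hive 2.0) one; jobs keep flowing to blue until
    green passes validation, so there is no downtime."""

    def __init__(self, blue: str, green: str):
        self.blue, self.green = blue, green
        self.active = blue

    def submit(self, job: str) -> str:
        """Return the cluster that should run this job."""
        return self.active

    def promote_green(self):
        """Flip traffic once validation passes."""
        self.active = self.green

router = BlueGreenRouter(blue="hive1-cluster", green="hive2-cluster")
first = router.submit("daily_report")   # still on blue
router.promote_green()
second = router.submit("daily_report")  # now on green
```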
This document discusses the Grails Resources Plugin, which provides a streamlined asset pipeline for managing static resources like CSS, JS, and images in Grails applications and plugins. It allows defining resource modules with dependencies, processing resources with an extensible pipeline, and rendering resource links with tag libraries. The plugin addresses challenges like managing dependencies between resources, bundling resources, minification, caching, and improving frontend performance.
Orchestrating Cloud Workloads with RightScale Self-Service (RightScale)
Organizations are seeking to drive agility by offering developers a self-service portal to access cloud resources. In order to provide push-button access to the cloud, IT DevOps teams need to orchestrate the deployment, configuration and integration of entire technology stacks or applications.
Tungsten Webinar: v6 & v7 Release Recap, and Beyond (Continuent)
In this webinar, our Customer Success Directors, Matthew Lang and Chris Parker, present a recap of our v6 and v7 releases, explore the newer features of v7, and preview what to expect in forthcoming releases over the next year.
AGENDA
v6 Patch Releases
v7 Release
- v7 Patch Releases
New Feature overview
- API & Security Changes
- Dynamic Active/Active (DAA)
- Distributed Datasource Groups (DDG)
- Connector in Docker
- Backup & Recovery updates
- Dashboard
- Additional Features & Enhancements
Coming Soon
SPEAKERS
Matthew Lang - Director of Customer Success at Continuent - has over 25 years of experience in database administration, database programming, and system architecture, including the creation of a database replication product that is still in use today. He has designed highly available, scalable systems that have allowed startups to quickly become enterprise organizations, utilizing a variety of technologies including open source projects, virtualization and cloud.
Chris Parker - Director of Customer Success at Continuent - is based in the UK and has over 20 years of experience working as a database administrator. Prior to joining Continuent, Chris managed large-scale Oracle and MySQL deployments at Warner Bros. and the BBC, and most recently worked at the online fashion company Net-A-Porter.
GDG Taipei 2020 - Cloud and On-premises Applications Integration Using Event-... (Rich Lee)
This document provides an overview and demonstration of integrating cloud and on-premises applications using event-driven architecture. It discusses Function-as-a-Service (FaaS) platforms like Google Cloud Functions. It also describes Kafka Connect for scalably streaming data between Apache Kafka and other systems like Google Cloud Pub/Sub using source and sink connectors. The document demonstrates configuring Pub/Sub connectors to integrate Kafka topics with Cloud Pub/Sub topics.
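A sink connector of the kind demonstrated is registered with Kafka Connect through a small JSON config. As a sketch, the connector class below is the one from Google's Pub/Sub Kafka connector, and the project and topic names are placeholders:

```json
{
  "name": "pubsub-sink",
  "config": {
    "connector.class": "com.google.pubsub.kafka.sink.CloudPubSubSinkConnector",
    "tasks.max": "2",
    "topics": "orders",
    "cps.project": "my-gcp-project",
    "cps.topic": "orders-mirror"
  }
}
```

POSTing this to the Connect REST API (`/connectors`) starts tasks that stream records from the Kafka `orders` topic into the Pub/Sub topic; a source connector works the same way in the opposite direction.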
Migrating deployment processes and Continuous Integration at SAP SE (B1 Systems GmbH)
The document summarizes SAP SE's migration of their deployment processes and continuous integration to a more modern, future-proof system using tools like SLES12, Chef, GitHub, OBS, and KIWI. It overviews the software and processes used, including operating system image building with KIWI, configuration management with Chef, and version control with GitHub. The new system provides benefits like cleaner deployments, reproducibility, and maintainability compared to the previous process.
Couchbase Singapore Meetup #2: Why Developing with Couchbase is easy!! (Karthik Babu Sekar)
The document discusses new features and improvements in Couchbase 4.6, including timestamp-based conflict resolution for cross datacenter replication, secret management and pluggable authentication modules for security, and new CBImport and CBExport tools. It also covers updates to search and query functionality.
Cask Webinar
Date: 08/10/2016
Link to video recording: https://www.youtube.com/watch?v=XUkANr9iag0
In this webinar, Nitin Motgi, CTO of Cask, walks through the new capabilities of CDAP 3.5 and explains how your organization can benefit.
Some of the highlights include:
- Enterprise-grade security - Authentication, authorization, secure keystore for storing configurations. Plus integration with Apache Sentry and Apache Ranger.
- Preview mode - Ability to preview and debug data pipelines before deploying them.
- Joins in Cask Hydrator - Capabilities to join multiple data sources in data pipelines
- Real-time pipelines with Spark Streaming - Drag & drop real-time pipelines using Spark Streaming.
- Data usage analytics - Ability to report application usage of data sets.
- And much more!
This document discusses running MySQL on Kubernetes with Percona Kubernetes Operators. It provides an introduction to cloud native applications and Kubernetes. It then discusses the benefits and challenges of running MySQL on Kubernetes compared to database-as-a-service options. It introduces Percona Kubernetes Operators for MySQL, which help manage and configure MySQL deployments on Kubernetes. Finally, it discusses how to deploy MySQL with the Percona Kubernetes Operators, including prerequisites, connectivity, architecture, high availability, and monitoring.
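As a sketch of the operator-driven deployment, a cluster is requested declaratively with a custom resource roughly like the following; the field layout may differ between operator versions, and the sizes and image tag here are illustrative:

```yaml
apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBCluster
metadata:
  name: demo-cluster
spec:
  pxc:
    size: 3                                   # three MySQL nodes for HA
    image: percona/percona-xtradb-cluster:8.0
  haproxy:
    enabled: true
    size: 2                                   # proxy layer for connections
```

The operator watches for this resource and reconciles the actual StatefulSets, services, and configuration to match it.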
Couchbase Chennai Meetup: Developing with Couchbase - made easy (Karthik Babu Sekar)
This session provided an overview of Couchbase solutions and the latest and greatest in the new release. It also showed how easy it is to develop with Couchbase and query the database.
Year in Review: Perforce 2014 Product Updates (Perforce)
Get an overview of all the key capabilities introduced in the Perforce versioning and collaboration platform this year. This is your best chance to catch-up quickly on all our 2014 enhancements.
This document discusses using Firebase as a backend solution for AngularJS applications. Firebase is a realtime database that allows storing and syncing data between clients and servers. It offers features like offline support, flexible data storage using JSON, and authentication integration. The document provides links to Firebase documentation on its REST API, integrations with other services, security rules, and tutorials for using Firebase with AngularJS applications.
Similar to Graphdb architecture and features update (20)
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers (akankshawande)
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Programming Foundation Models with DSPy - Meetup Slides (Zilliz)
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack (shyamraj55)
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A...) (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Introduction of Cybersecurity with OSS at Code Europe 2024 (Hiroshi SHIBATA)
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
Fueling AI with Great Data with Airbyte Webinar (Zilliz)
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Building Production Ready Search Pipelines with Spark and Milvus (Zilliz)
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
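One step of such a pipeline — gathering the (id, vector) rows a Spark job produces into fixed-size batches shaped for a vector-database insert call — can be sketched in plain Python; the batch size and field names are illustrative:

```python
def to_insert_batches(rows, batch_size=2):
    """Group (id, vector) pairs into insert-ready batches so the
    serving store receives a few large writes instead of many small ones."""
    batch = {"ids": [], "embeddings": []}
    for row_id, vec in rows:
        batch["ids"].append(row_id)
        batch["embeddings"].append(vec)
        if len(batch["ids"]) == batch_size:
            yield batch
            batch = {"ids": [], "embeddings": []}
    if batch["ids"]:  # flush the trailing partial batch
        yield batch

rows = [(1, [0.1, 0.2]), (2, [0.3, 0.4]), (3, [0.5, 0.6])]
batches = list(to_insert_batches(rows))
```

In a real pipeline each batch would then be passed to the vector database's insert API; that call is omitted here to keep the sketch self-contained.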
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Unlock the Future of Search with MongoDB Atlas: Vector Search Unleashed (Malak Abu Hammad)
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
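A minimal sketch of the aggregation stage involved, written as a Python pipeline definition; the index name, field path, and query vector are placeholders rather than values from the presentation:

```python
# The query vector would normally come from an embedding model.
query_vector = [0.12, -0.07, 0.33]

# Atlas Vector Search pipeline: find the 5 nearest documents by the
# indexed embedding field, then project the title and similarity score.
pipeline = [
    {
        "$vectorSearch": {
            "index": "default_vector_index",   # hypothetical index name
            "path": "plot_embedding",          # hypothetical field
            "queryVector": query_vector,
            "numCandidates": 100,
            "limit": 5,
        }
    },
    {"$project": {"title": 1, "score": {"$meta": "vectorSearchScore"}}},
]
```

Passed to `collection.aggregate(pipeline)`, this returns the top matches ranked by vector similarity.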
What do a Lego brick and the XZ backdoor have in common? (Speck&Tech)
ABSTRACT: At first glance, a Lego brick and the XZ backdoor might seem to have in common only that both are building blocks, or dependencies, of creative and software projects. In reality, a Lego brick and the XZ backdoor case share much more than that.
Join the presentation to dive into a story of interoperability, standards, and open formats, and then discuss the important role contributors play in a sustainable open source community.
BIO: An advocate of free software and of standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she was involved in several events, migrations, and training activities related to LibreOffice. Previously she worked on LibreOffice migrations and training courses for various public administrations and private companies. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when she is not pursuing her passion for computers and for Geeko, she cultivates her curiosity about astronomy (hence her nickname, deneb_alpha).
How to Get CNIC Information System with Paksim Ga (danishmna97)
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
HCL Notes and Domino License Cost Reduction in the World of DLAU (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and licensing under the CCB and CCX models have been a hot topic in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new kind of licensing works and what benefits it brings you. Above all, you surely want to stay within budget and save costs wherever possible. We understand that, and we want to help!
We explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. Some practices can also lead to unnecessary expenses, for example using a person document instead of a mail-in database for shared mailboxes. We show you such cases and their solutions. And of course we explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It gives you the tools and the know-how to keep an overview. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
Topics covered:
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to make the best use of it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Real-world examples and best practices you can apply immediately
5th LF Energy Power Grid Model Meet-up Slides (DanBrown980551)
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Microsoft Teams session or in person at TU/e, located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Best 20 SEO Techniques To Improve Website Visibility In SERP (Pixlogix Infotech)
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
4. HA Cluster: Performance & features
• ClientAPI; Improved Sync Tx Log; Failover
• 2x faster writes compared to 5.4 (LDBC-50m)
• Tests: functional, stress, load. Extensible, open
• LVM-based backup & replication
• Plans:
– incremental backup
– Rsync-like full replication
– Integration with Connectors
2014 #4
5. WB & Connectors
• WB + Connectors are integrated into a single GraphDB distribution
• The WB + Connectors team is now part of the GraphDB team
• WB: Query monitoring
• Separate session for the Connectors today
9. Explain plan
• Explain plan – “FROM onto:explain”
• http://owlim.ontotext.com/display/GraphDB6/GraphDB-SE+Explain+Plan
• Issues:
– No subquery support
– Queries are actually executed; long-running queries are a pain
• Explain plan lite – won’t show the execution times
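A minimal example of the mechanism described on this slide: adding the onto:explain pseudo-graph to a query's dataset makes GraphDB return the query plan instead of plain results. The triple patterns and example IRIs below are illustrative:

```sparql
PREFIX onto: <http://www.ontotext.com/>

SELECT ?person ?name
FROM onto:explain
WHERE {
    ?person a <http://example.com/Person> ;
            <http://example.com/name> ?name .
}
```

Note the caveat from the slide: the query is actually executed while being explained, so long-running queries remain expensive; the "lite" variant omits the execution times.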