This talk focuses on the Peergreen Deployment System and how we leverage the recent OSGi Resolver specification to build an efficient and extensible OSGi deployment framework.
Infrastructure as Code (IaC) is the management of infrastructure (networks, virtual machines, load balancers, and connection topology) in a descriptive model, using the same versioning the DevOps team uses for source code. Like the principle that the same source code generates the same binary, an IaC model generates the same environment every time it is applied. IaC is a key DevOps practice and is used in conjunction with Continuous Delivery.
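The "same model, same environment" property is idempotence, and it can be sketched in a few lines of Python (the environment model and field names here are purely illustrative):

```python
# Idempotence: applying the same declarative model twice yields the same
# environment as applying it once. "Environment" is modeled as a dict.

def apply_model(environment: dict, model: dict) -> dict:
    """Converge the environment to the state described by the model."""
    converged = dict(environment)
    converged.update(model)  # desired state wins over current state
    # drop anything the model does not declare
    return {k: v for k, v in converged.items() if k in model}

model = {"vm_count": 3, "load_balancer": "lb-front", "network": "10.0.0.0/16"}

once = apply_model({}, model)
twice = apply_model(once, model)
assert once == twice == model  # same model, same environment
```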
This document provides an overview of Apache Ambari, an open source framework for provisioning, managing and monitoring Hadoop clusters. It discusses Ambari's architecture and features for provisioning clusters, managing services, monitoring metrics and alerts, and extensibility through Ambari stacks, views and blueprints. The document also outlines Ambari's release cadence and upcoming features around operations, extensibility and troubleshooting insights.
Ansible-Durham Meetup: Using Ansible for Cisco ACI deployment - Joel W. King
Networks are evolving from hundreds or thousands of individual devices to the Software-Defined Network paradigm of a single fabric under a central controller.
The GUI on top of an SDN controller isn't sufficient, and we will still need automation. This presentation describes how Ansible can add value to configuration management of a Cisco Application Centric Infrastructure (ACI) fabric. It demonstrates how Ansible modules can use the northbound REST API of the Application Policy Infrastructure Controller (APIC).
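As a hedged sketch of what such a module does under the hood, the APIC's documented aaaLogin call can be built as a plain URL plus JSON body (the hostname and credentials are placeholders; nothing is sent here):

```python
import json

def apic_login_request(host: str, user: str, pwd: str):
    """Build (url, body) for the APIC aaaLogin call; nothing is sent."""
    url = f"https://{host}/api/aaaLogin.json"
    body = json.dumps({"aaaUser": {"attributes": {"name": user, "pwd": pwd}}})
    return url, body

url, body = apic_login_request("apic.example.net", "admin", "secret")
assert url == "https://apic.example.net/api/aaaLogin.json"
assert json.loads(body)["aaaUser"]["attributes"]["name"] == "admin"
```

A real module would POST this body, capture the returned token cookie, and reuse it on subsequent calls.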
SmartSense is a support service from Hortonworks that monitors Hadoop clusters, identifies potential issues, and provides solutions. It enhances support subscriptions by enabling faster case resolution, proactive cluster configuration, and long-term cluster optimization. SmartSense integrates with Ambari and includes an Ambari view. It allows customers to download a bundle, install/capture analytics, and review and resolve any issues. The latest version of SmartSense is now included as part of Ambari installations and upgrades.
Turning the Heat up on DevOps: Providing a web-based editing experience aroun... - Michael Elder
We’ll present a web-based editing experience around Heat Orchestration Templates. We have created a unified editing experience leveraging diagram and text-based metaphors in one seamless flow. We have also extended the Heat engine to support full-stack deployment by integrating application deployment capabilities from IBM UrbanCode Deploy. We’ll demonstrate creating ready-to-deploy HOT documents which capture Compute, Network, and Storage resources as well as our own extensions around software configuration and deployment resources.
We’ll describe how our solution supports three characteristics for Software Defined Environments:
- Organic: Support creating and updating environments in place as their purpose or architecture changes over time.
- Version-aware: We’ll show incorporating native SCM solutions like Git as part of the web-based interface to version and update templates across multiple environments.
- Fullstack Engineering: We’ll describe templates which capture cloud resources and software resources as part of a unified template which can then provision cloud resources and deploy software in one action.
Our extensions to Heat will be described along with our experiences in extending the engine as a vendor.
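A HOT document of the kind described above has a small, regular shape; the following Python dict mirrors it (resource names and the image/flavor values are illustrative):

```python
# Minimal shape of a Heat Orchestration Template (HOT), built as a dict.
hot = {
    "heat_template_version": "2015-04-30",
    "parameters": {
        "flavor": {"type": "string", "default": "m1.small"},
    },
    "resources": {
        "web_server": {
            "type": "OS::Nova::Server",  # a Compute resource
            "properties": {
                "image": "ubuntu-20.04",
                "flavor": {"get_param": "flavor"},
            },
        },
    },
    "outputs": {
        "server_ip": {
            "value": {"get_attr": ["web_server", "first_address"]},
        },
    },
}

assert hot["resources"]["web_server"]["type"] == "OS::Nova::Server"
```

Vendor extensions like the software-deployment resources described in the abstract would appear as additional entries under `resources` with their own resource types.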
Serverless technologies and capabilities are here and are accessible now more than ever.
The power of infinite scale and system capabilities has never been more accessible. This also affects traditional front-end development, as serverless technologies allow easy construction of backend support for any frontend.
In this talk, we will demonstrate how to build a fully functional GraphQL endpoint for front-end applications using the Apollo Server and Client libraries, utilizing different cloud providers. We will also demonstrate using the Serverless.com framework to set up the required infrastructure as code to simplify and support this setup.
The video of the presentation (Hebrew):
https://youtu.be/8ba4cpdtK-8
Managing your Hadoop Clusters with Apache Ambari - DataWorks Summit
Deploying, configuring, and managing large Apache Hadoop and HBase clusters can be quite complex. Once you have your clusters, keeping them up and running and making sure that the SLAs are met presents even more challenges and headaches to Hadoop operators. To make matters worse, managing upgrades can be a nightmare. Hadoop users are presented with their own fair share of difficulties such as slow running jobs and not knowing why they are slow. For third-party software vendors interested in incorporating Hadoop management and monitoring capabilities, there does not seem to be an obvious, easy solution. Apache Ambari is aimed at making the lives of Hadoop operators, users, and integrators simpler by providing a management interface to do all of that and more. This session presents uses of Ambari's Web UI for Hadoop operators (deploying, managing, and monitoring) as well as Hadoop users (job analytics). The talk will also touch upon Ambari's REST API and how it is used in the real world. The session concludes by revealing the future roadmap of Ambari including queue management, upgrade, disaster recovery, high availability, and more.
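As a taste of the REST API mentioned above, starting a service is a single PUT against the cluster's services endpoint; this sketch only builds the request (the hostname and cluster name are placeholders, and nothing is sent):

```python
import json

def ambari_start_service(host: str, cluster: str, service: str):
    """Build (method, url, headers, body) for starting a service
    through Ambari's REST API; nothing is sent here."""
    url = f"http://{host}:8080/api/v1/clusters/{cluster}/services/{service}"
    headers = {"X-Requested-By": "ambari"}  # required CSRF-protection header
    body = json.dumps({
        "RequestInfo": {"context": f"Start {service} via REST"},
        "Body": {"ServiceInfo": {"state": "STARTED"}},
    })
    return "PUT", url, headers, body

method, url, headers, body = ambari_start_service("ambari.example.net", "demo", "HDFS")
assert method == "PUT" and url.endswith("/services/HDFS")
```

Setting `"state": "INSTALLED"` instead would stop the service; the call returns a request id that can be polled for progress.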
CAPS: What's best for deploying and managing OpenStack? Chef vs. Ansible vs. ... - Daniel Krook
Presentation at the OpenStack Summit in Tokyo, Japan on October 29, 2015.
http://sched.co/49vI
This talk will cover the pros and cons of four different OpenStack deployment mechanisms. Puppet, Chef, Ansible, and Salt for OpenStack all claim to make it much easier to configure and maintain hundreds of OpenStack deployment resources. With the advent of large-scale, highly available OpenStack deployments spread across multiple global regions, the choice of which deployment methodology to use has become more and more relevant.
Beyond the initial day-one deployment, when it comes to the day-two-and-beyond questions of updating and upgrading existing OpenStack deployments, it becomes all the more important to choose the right tool.
Come join the Bluebox and IBM team to discuss the pros and cons of these approaches. We look at each of these four tools in depth, explore their design and function, and determine which one best addresses your particular deployment needs.
Daniel Krook - Senior Software Engineer, Cloud and Open Source Technologies, IBM
Paul Czarkowski - Cloud Engineer at Blue Box, an IBM company
The document provides an overview of OpenStack, an open source cloud operating system. It describes the key OpenStack services including Compute, Object Storage, Image Service, Networking, Dashboard, Orchestration, Database, and Data Processing. Popular use cases for OpenStack include bursting workloads from private to public clouds for extra capacity, and using multiple clouds for high availability and disaster recovery.
One tool, two fabrics: Ansible and Nexus 9000 - Joel W. King
Ansible can be used to automate configuration of Cisco Nexus 9000 series switches running either NX-OS or Application Centric Infrastructure (ACI). It allows using YAML files, Jinja templates, and Python modules to provision and manage network infrastructure without relying on CLI commands. The presentation demonstrated using Ansible roles to configure NTP servers and backup settings for an ACI fabric by specifying variables in a CSV file and generating XML configuration files from templates.
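The CSV-to-template flow described above can be prototyped with nothing but the Python standard library; the XML below imitates the shape of ACI's NTP policy objects (the values and exact class layout should be checked against the APIC object model):

```python
import csv
import io
from string import Template

# One NTP server per CSV row, exactly as an operator would supply them.
csv_text = "name,preferred\nntp1.example.net,true\nntp2.example.net,false\n"

row_tpl = Template('  <datetimeNtpProv name="$name" preferred="$preferred"/>')

rows = list(csv.DictReader(io.StringIO(csv_text)))
xml = '<datetimePol name="default">\n' + "\n".join(
    row_tpl.substitute(r) for r in rows
) + "\n</datetimePol>"

assert 'name="ntp1.example.net"' in xml
assert xml.count("<datetimeNtpProv") == 2
```

In the real workflow an Ansible role would render the same kind of template with Jinja2 and push the result through the APIC REST API instead of printing it.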
With the recent arrival of Kubernetes operators for Oracle, you now have the possibility to easily deploy and manage an Oracle database on a Kubernetes cluster.
Why would you want this? How do you do it? Which features do these operators provide? Are they portable between the native Kubernetes cloud services offered by OCI, GCP, and AWS, and on-premises deployments?
In this session, we will explore the basics of Kubernetes, the implications of running an Oracle database on it, and test the Google ElCarro and Oracle operators for Kubernetes in GCP, AWS, and OCI.
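Operators like El Carro are driven by a custom resource; the dict below sketches its rough shape (the apiVersion and field names are illustrative and should be checked against the operator's own CRD reference before use):

```python
# Rough shape of the custom resource an Oracle-database operator consumes.
# The operator watches for objects of this kind and reconciles the cluster
# (pods, volumes, services) toward the declared database.
instance = {
    "apiVersion": "oracle.db.anthosapis.com/v1alpha1",  # illustrative
    "kind": "Instance",
    "metadata": {"name": "mydb"},
    "spec": {
        "version": "19.3",
        "cdbName": "MYDB",
        "disks": [{"name": "DataDisk", "size": "100Gi"}],
    },
}

assert instance["kind"] == "Instance"
assert instance["spec"]["cdbName"] == "MYDB"
```

Applying this manifest (as YAML, via `kubectl apply`) is the whole deployment step; everything else is the operator's reconciliation loop.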
ARCA is an affordable turnkey disaster recovery appliance that provides enterprise-level business continuity for servers. It replicates servers and allows rapid failover in minutes. ARCA includes all necessary backup, replication, and management software along with a 3-year warranty for a total cost of $2,500-$11,000 or $120-$550 per month.
This document summarizes release notes and upcoming features for Apache Ambari. Key points include: Ambari 1.2.4 added non-root SSH access, Oracle database support, and customizable user accounts. Version 1.2.5 added Kerberos support, dashboard widgets, and security enhancements. Upcoming versions will add support for Hadoop 2.0, service blueprints, high availability, and integration with Microsoft SCOM for cluster monitoring.
Building a REST Service in minutes with Spring Boot - Omri Spector
A walkthrough of building a microservice using Spring Boot.
Deck presented at Java 2016
Source accompanying presentation can be found at https://github.com/ospector/sbdemo
SharePoint 24x7x365: Architecting for High Availability, Fault Tolerance and D... - Eric Shupps
Building SharePoint farms for development and testing is easy. But building highly available farms to meet enterprise service level agreements that are fault tolerant, scalable and fully recoverable? Not so simple. Learn how to plan, design and implement a highly available on-premises farm architecture for 2016 and 2019 using proven, field-tested techniques and practical guidance.
Rackspace Private Cloud presentation for ChefConf 2013 - Joe Breu
Rackspace uses Chef and other operational tools to automate the deployment of OpenStack. They developed their own cookbooks to model real-world deployments across multiple operating systems and to handle updates and new OpenStack services. Over time, they added capabilities like high availability and vendor integration. To simplify deployment and reduce operator overhead, they created OpenCenter, which automates tasks through a solver and provides an API. OpenCenter lowers the OpenStack knowledge needed by operators.
Sylvain Utard presents Build Realtime Search. Realtime Search is a search API provider that processes 2 billion operations per month across 30+ servers. The backend is built in C++, analytics in Java, and the website in Rails. It supports 12 API clients in various languages. Realtime Search provides an average query time of less than 10ms across multiple datacenters globally. The REST API is CORS compliant for use in JavaScript applications and aims for an overall end-user latency of less than 100ms.
This document provides an overview of DevOps with Swapnil Jain. It introduces Swapnil and his background, then covers an agenda on Ansible including an introduction, use cases, architecture, modules demo, playbook demo, Ansible Tower features, and Tower demo. Ansible is introduced as an open source configuration management and orchestration tool that can automate and standardize remote host configuration. Common use cases include provisioning, configuration management, application deployment, continuous delivery, security and compliance, and orchestration.
Pie is the fastest, easiest, and most fun chat app you’ll ever use for work. In this presentation we reveal how Pie uses Amazon Web Services and open source software tools to power its realtime chat platform.
Spark HBase Connector: Feature Rich and Efficient Access to HBase Through Spa... - Databricks
Both Spark and HBase are widely used, but how to use them together with high performance and simplicity is a very hard topic. Spark HBase Connector (SHC) provides feature-rich and efficient access to HBase through Spark SQL. It bridges the gap between the simple HBase key value store and complex relational SQL queries, and enables users to perform complex data analytics on top of HBase using Spark.
SHC implements the standard Spark data source APIs, and leverages the Spark catalyst engine for query optimization. To achieve high performance, SHC constructs the RDD from scratch instead of using the standard HadoopRDD. With the customized RDD, all critical techniques can be applied and fully implemented, such as partition pruning, column pruning, predicate pushdown and data locality. The design makes the maintenance very easy, while achieving a good tradeoff between performance and simplicity. Also, SHC has integrated natively with Phoenix data types. With SHC, Spark can execute batch jobs to read/write data from/into Phoenix tables. Phoenix can also read/write data from/into HBase tables created by SHC. For example, users can run a complex SQL query on top of an HBase table created by Phoenix inside Spark, perform a table join against a DataFrame which reads the data from a Hive table, or integrate with Spark Streaming to implement a more complicated system.
This session will demonstrate how SHC works, how to use SHC in secure/non-secure clusters, how SHC works with multi-HBase clusters and how Spark reads/writes data from/into Phoenix tables with SHC, etc. It will also benefit people who use Spark and other data sources (besides HBase) as it inspires them with ideas of how to support high performance data source access at the Spark DataFrame level.
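SHC is configured through a JSON catalog that maps an HBase table onto a Spark SQL schema; here is a minimal sketch of that catalog (the table and column names are made up):

```python
import json

# SHC catalog: maps HBase column families/qualifiers to Spark SQL columns.
catalog = json.dumps({
    "table": {"namespace": "default", "name": "Contacts"},
    "rowkey": "key",
    "columns": {
        "rowkey": {"cf": "rowkey",   "col": "key",   "type": "string"},
        "name":   {"cf": "personal", "col": "name",  "type": "string"},
        "phone":  {"cf": "personal", "col": "phone", "type": "string"},
    },
})

# With a live cluster, this catalog would be handed to the reader, e.g.:
#   spark.read.options(catalog=catalog)
#        .format("org.apache.spark.sql.execution.datasources.hbase").load()
parsed = json.loads(catalog)
assert parsed["rowkey"] == "key"
assert parsed["columns"]["name"]["cf"] == "personal"
```

The catalog is where partition pruning and predicate pushdown get their leverage: SHC knows from it which HBase cells back each SQL column.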
High Availability and Disaster Recovery in SharePoint 2013 with SQL Serv... - serge luca
This document discusses using SQL Server Always On Availability Groups to provide high availability and disaster recovery for SharePoint 2013. It begins with an agenda and overview of concepts like RPO, RTO and SLA. It then covers SharePoint 2013 architecture and how Always On Availability Groups can be used to configure a SharePoint farm for either synchronous high availability across two nodes or asynchronous disaster recovery across two separate SharePoint farms. It concludes with considerations for search backup/restore and limitations.
Heat is a project that provides orchestration of multiple cloud applications and resources for OpenStack. It allows users to define infrastructure in templates and provision resources like instances, volumes, and security groups. Key features include integration with other OpenStack projects, compatibility with AWS CloudFormation templates and API, and advanced services like auto-scaling and high availability. The project is actively developed by an open source community with the goal of becoming part of the OpenStack core.
Livy is an open source REST interface for interacting with Apache Spark clusters. It allows submitting Spark jobs via REST from anywhere and manages Spark contexts. Key features include interactive shells for Scala, Python and R; batch job submission; handling multiple jobs simultaneously; and using existing code by interfacing with a predefined Spark context. Livy also integrates with Jupyter notebooks and supports sharing cached data between jobs. It provides security via user impersonation and communication encryption.
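Livy's interactive mode boils down to two REST calls; this sketch builds them without sending anything (the hostname is a placeholder; 8998 is Livy's default port):

```python
import json

def livy_requests(host: str, session_id: int, code: str):
    """Build the two Livy REST calls for running a snippet interactively:
    create a session, then submit a statement. Nothing is sent here."""
    base = f"http://{host}:8998"
    create = ("POST", f"{base}/sessions",
              json.dumps({"kind": "pyspark"}))          # interactive shell kind
    run = ("POST", f"{base}/sessions/{session_id}/statements",
           json.dumps({"code": code}))                  # code runs in that session
    return create, run

create, run = livy_requests("livy.example.net", 0, "spark.range(10).count()")
assert create[1].endswith("/sessions")
assert json.loads(run[2])["code"].startswith("spark.range")
```

Because the Spark context lives inside the Livy session, successive statements can share cached data, which is the sharing feature the abstract mentions.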
This document discusses how to use Oracle Exachk and Oracle Enterprise Manager 12c to monitor Oracle Exadata systems. It describes how to run Exachk regularly to check the health of Exadata and review the output reports. It also explains how to configure Oracle EM 12c with the Engineered System Healthchecks plug-in to monitor Exadata and generate alerts. Running Exachk through the daemon allows it to run across all cluster nodes. The plug-in places results in EM metrics and templates can disable unwanted alerts.
Akka in Practice: Designing Actor-based Applications - NLJUG
This document provides an overview of real world application design patterns for intermediate Akka developers. It discusses various patterns such as booting up an Akka app, creating a receptionist actor to handle external requests, creating child actors, initializing actor state using messages and become, configuring Akka apps, using the event stream to communicate between actors, and handling complex request/response flows. The presentation aims to demonstrate practical techniques for building robust Akka applications.
COSCUP 2013: Continuous Integration on top of Hadoop - Wisely Chen
This document discusses implementing continuous integration (CI) for Hadoop projects. It describes problems with debugging and assessing performance of MapReduce jobs. The proposed solution is to set up a CI system for Hadoop that automates unit testing, performance testing, documentation generation and deployment. This allows developers to catch issues early before deploying to production and improves productivity. Demo examples are provided of the CI system failing and passing unit tests and assessing performance.
The document provides an overview of Google App Engine (GAE) for running Java applications on cloud platforms. It discusses that in GAE, developers do not manage machines directly and instead upload binaries for GAE to run. It describes various services available in GAE like data storage, processing images, and cron jobs. The document also summarizes tools for local development and deployment, limitations of GAE around filesystem and socket access, and advantages like built-in logging and routing by domain headers.
BYOP: Custom Processor Development with Apache NiFi - DataWorks Summit
Apache NiFi, a robust, scalable, and secure tool for data flow management, ships with over 212 processors to ingest, route, manipulate, and exfil data from a variety of sources and consumers. But many users turn to NiFi to meet unusual requirements — from proprietary protocol parsing, to running inside connected cars, to offloading massive hardware metrics from oil rigs in the most remote environments. Rather than posting a community request for custom development or offloading unusual demands to unnecessary external systems, there’s an answer in NiFi. Learn how NiFi allows you to quickly prototype custom processors in the scripting language of your choice against live production data without affecting your existing flows. Easily translate prototypes to full-fledged processors to optimize performance and leverage the full provenance reporting infrastructure. Discover how the framework provides conventions to streamline your development and minimize common boilerplate code, and the robust testing framework to make testing easy, and dare we say, fun.
Expected prior knowledge / intended audience: developers and data flow managers should have passing knowledge of Apache NiFi as a platform for routing, transforming, and delivering data through systems (a brief overview will be provided). The intended audience will have experience with programming in Groovy, Ruby, Jython, ECMAScript/Javascript, or Lua.
Takeaways: Attendees will gain an understanding in writing custom processors for Apache NiFi, including the component lifecycle, unit and integration testing, quick prototyping using a scripting language of their choice, and the artifact publishing and deployment process.
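One practical prototyping habit the workflow above suggests: keep the flowfile transform as a pure function so it can be unit-tested before being pasted into a scripted-processor body (this standalone Python stand-in is illustrative, not NiFi API code):

```python
# The heart of a scripted NiFi processor is a transform over flowfile
# content. Keeping it pure (bytes in, bytes out) lets you test it against
# sample data before wiring it into session/flowfile handling.

def transform(content: bytes) -> bytes:
    """Uppercase the payload; stands in for real parsing/routing logic."""
    return content.upper()

# Prototype against sample data exactly as the framework would feed it:
assert transform(b"sensor,42") == b"SENSOR,42"
```

Inside an ExecuteScript body, this function would be called from the session's stream callback, and the result written back to the flowfile before routing to success.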
CAPS: What's best for deploying and managing OpenStack? Chef vs. Ansible vs. ...Daniel Krook
Presentation at the OpenStack Summit in Tokyo, Japan on October 29, 2015.
http://sched.co/49vI
This talk will cover the pros and cons of four different OpenStack deployment mechanisms. Puppet, Chef, Ansible, and Salt for OpenStack all claim to make it much easier to configure and maintain hundreds of OpenStack deployment resources. With the advent of large-scale, highly available OpenStack deployments spread across multiple global regions, the choice of which deployment methodology to use has become more and more relevant.
Beyond the initial day-one deployment, when it comes to the day-two and beyond questions of updating and upgrading existing OpenStack deployments, it becomes all the more important choose the right tool.
Come join the Bluebox and IBM team to discuss the pros and cons of these approaches. We look at each of these four tools in depth, explore their design and function, and determine which scores higher than others to address your particular deployment needs.
Daniel Krook - Senior Software Engineer, Cloud and Open Source Technologies, IBM
Paul Czarkowski - Cloud Engineer at Blue Box, an IBM company
Daniel Krook - Senior Software Engineer, Cloud and Open Source Technologies, IBM
The document provides an overview of OpenStack, an open source cloud operating system. It describes the key OpenStack services including Compute, Object Storage, Image Service, Networking, Dashboard, Orchestration, Database, and Data Processing. Popular use cases for OpenStack include bursting workloads from private to public clouds for extra capacity, and using multiple clouds for high availability and disaster recovery.
One tool, two fabrics: Ansible and Nexus 9000Joel W. King
Ansible can be used to automate configuration of Cisco Nexus 9000 series switches running either NX-OS or Application Centric Infrastructure (ACI). It allows using YAML files, Jinja templates, and Python modules to provision and manage network infrastructure without relying on CLI commands. The presentation demonstrated using Ansible roles to configure NTP servers and backup settings for an ACI fabric by specifying variables in a CSV file and generating XML configuration files from templates.
With the recent apparition of Kubernetes operators for Oracle, you have now the possibility to easily deploy and handle an Oracle database on a Kubernetes cluster.
Why you could want this? How do you do it? Which features those operators provide? Is it portable between the native Kubernetes cloud services offered by OCI, GCP, and AWS, and on-premises deployments?.
In this session, we will explore the basics of Kubernetes, the implications of running an Oracle database on it, and test the Google ElCarro and Oracle operators for Kubernetes in GCP, AWS, and OCI.
ARCA is an affordable turnkey disaster recovery appliance that provides enterprise-level business continuity for servers. It replicates servers and allows rapid failover in minutes. ARCA includes all necessary backup, replication, and management software along with a 3-year warranty for a total cost of $2,500-$11,000 or $120-$550 per month.
This document summarizes release notes and upcoming features for Apache Ambari. Key points include: Ambari 1.2.4 added non-root SSH access, Oracle database support, and customizable user accounts. Version 1.2.5 added Kerberos support, dashboard widgets, and security enhancements. Upcoming versions will add support for Hadoop 2.0, service blueprints, high availability, and integration with Microsoft SCOM for cluster monitoring.
Building a REST Service in minutes with Spring BootOmri Spector
A walk through building a micro service using Spring Boot.
Deck presented at Java 2016
Source accompanying presentation can be found at https://github.com/ospector/sbdemo
SharePoint 24x7x365 Architecting for High Availability, Fault Tolerance and D...Eric Shupps
Building SharePoint farms for development and testing is easy. But building highly available farms to meet enterprise service level agreements that are fault tolerant, scalable and fully recoverable? Not so simple. Learn how to plan, design and implement a highly available on-premises farm architecture for 2016 and 2019 using proven, field-tested techniques and practical guidance.
Rackspace Private Cloud presentation for ChefConf 2013Joe Breu
Rackspace uses Chef and other operational tools to automate the deployment of OpenStack. They developed their own cookbooks to model real-world deployments across multiple operating systems and to handle updates and new OpenStack services. Over time, they added capabilities like high availability and vendor integration. To simplify deployment and reduce operator overhead, they created OpenCenter, which automates tasks through a solver and provides an API. OpenCenter lowers the OpenStack knowledge needed by operators.
Sylvain Utard presents on Build Realtime Search. Realtime Search is a search API provider that processes 2 billion operations per month across 30+ servers. The backend is built in C++, analytics in Java, and the website in Rails. It supports 12 API clients in various languages. Realtime Search provides an average query time of less than 10ms across multiple datacenters globally. The REST API is CORS compliant for use in JavaScript applications and aims for an overall end-user latency of less than 100ms.
This document provides an overview of DevOps with Swapnil Jain. It introduces Swapnil and his background, then covers an agenda on Ansible including an introduction, use cases, architecture, modules demo, playbook demo, Ansible Tower features, and Tower demo. Ansible is introduced as an open source configuration management and orchestration tool that can automate and standardize remote host configuration. Common use cases include provisioning, configuration management, application deployment, continuous delivery, security and compliance, and orchestration.
Pie is the fastest, easiest, and most fun chat app you’ll ever use for work. In this presentation we reveal how Pie uses Amazon Web Services and open source software tools to power its realtime chat platform.
Spark HBase Connector: Feature Rich and Efficient Access to HBase Through Spa...Databricks
Both Spark and HBase are widely used, but how to use them together with high performance and simplicity is a very hard topic. Spark HBase Connector (SHC) provides feature-rich and efficient access to HBase through Spark SQL. It bridges the gap between the simple HBase key value store and complex relational SQL queries, and enables users to perform complex data analytics on top of HBase using Spark.
SHC implements the standard Spark data source APIs, and leverages the Spark catalyst engine for query optimization. To achieve high performance, SHC constructs the RDD from scratch instead of using the standard HadoopRDD. With the customized RDD, all critical techniques can be applied and fully implemented, such as partition pruning, column pruning, predicate pushdown and data locality. The design makes the maintenance very easy, while achieving a good tradeoff between performance and simplicity. Also, SHC has integrated natively with Phoenix data types. With SHC, Spark can execute batch jobs to read/write data from/into Phoenix tables. Phoenix can also read/write data from/into HBase tables created by SHC. For example, users can run a complex SQL query on top of an HBase table created by Phoenix inside Spark, perform a table join against a DataFrame which reads the data from a Hive table, or integrate with Spark Streaming to implement a more complicated system.
This session will demonstrate how SHC works, how to use SHC in secure/non-secure clusters, how SHC works with multi-HBase clusters and how Spark reads/writes data from/into Phoenix tables with SHC, etc. It will also benefit people who use Spark and other data sources (besides HBase) as it inspires them with ideas of how to support high performance data source access at the Spark DataFrame level.
High Availability and Incident Recovery in SharePoint 2013 with Sql Serv... (serge luca)
This document discusses using SQL Server Always On Availability Groups to provide high availability and disaster recovery for SharePoint 2013. It begins with an agenda and overview of concepts like RPO, RTO and SLA. It then covers SharePoint 2013 architecture and how Always On Availability Groups can be used to configure a SharePoint farm for either synchronous high availability across two nodes or asynchronous disaster recovery across two separate SharePoint farms. It concludes with considerations for search backup/restore and limitations.
Heat is a project that provides orchestration of multiple cloud applications and resources for OpenStack. It allows users to define infrastructure in templates and provision resources like instances, volumes, and security groups. Key features include integration with other OpenStack projects, compatibility with AWS CloudFormation templates and API, and advanced services like auto-scaling and high availability. The project is actively developed by an open source community with the goal of becoming part of the OpenStack core.
Livy is an open source REST interface for interacting with Apache Spark clusters. It allows submitting Spark jobs via REST from anywhere and manages Spark contexts. Key features include interactive shells for Scala, Python and R; batch job submission; handling multiple jobs simultaneously; and using existing code by interfacing with a predefined Spark context. Livy also integrates with Jupyter notebooks and supports sharing cached data between jobs. It provides security via user impersonation and communication encryption.
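Livy's REST surface can be sketched by the JSON bodies a client sends. The endpoints below (`POST /sessions`, `POST /sessions/{id}/statements`, `POST /batches`) are part of Livy's documented API; the host, jar path, and class name are placeholders:

```python
import json

LIVY = "http://livy-host:8998"  # placeholder host

# Start an interactive PySpark session.
create_session = {"kind": "pyspark"}

# Run a statement inside an existing session (id 0 here).
run_statement = {"code": "sc.parallelize(range(100)).sum()"}

# Submit a batch job from a jar (file path and class are placeholders).
submit_batch = {
    "file": "hdfs:///jobs/etl.jar",
    "className": "com.example.EtlJob",
    "args": ["2024-06-01"],
}

# A client would POST these bodies as JSON, e.g. with the requests library:
#   requests.post(f"{LIVY}/sessions", json=create_session)
#   requests.post(f"{LIVY}/sessions/0/statements", json=run_statement)
#   requests.post(f"{LIVY}/batches", json=submit_batch)

print(json.dumps(submit_batch, indent=2))
```

Because submission is plain HTTP, any tool that can POST JSON (a notebook, a CI job, a scheduler) can drive the cluster without having Spark installed locally, which is exactly the decoupling the abstract describes.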
This document discusses how to use Oracle Exachk and Oracle Enterprise Manager 12c to monitor Oracle Exadata systems. It describes how to run Exachk regularly to check the health of Exadata and review the output reports. It also explains how to configure Oracle EM 12c with the Engineered System Healthchecks plug-in to monitor Exadata and generate alerts. Running Exachk through the daemon allows it to run across all cluster nodes. The plug-in places results in EM metrics and templates can disable unwanted alerts.
Akka in Practice: Designing Actor-based Applications (NLJUG)
This document provides an overview of real world application design patterns for intermediate Akka developers. It discusses various patterns such as booting up an Akka app, creating a receptionist actor to handle external requests, creating child actors, initializing actor state using messages and become, configuring Akka apps, using the event stream to communicate between actors, and handling complex request/response flows. The presentation aims to demonstrate practical techniques for building robust Akka applications.
Coscup 2013: Continuous Integration on top of Hadoop (Wisely Chen)
This document discusses implementing continuous integration (CI) for Hadoop projects. It describes problems with debugging and assessing performance of MapReduce jobs. The proposed solution is to set up a CI system for Hadoop that automates unit testing, performance testing, documentation generation and deployment. This allows developers to catch issues early before deploying to production and improves productivity. Demo examples are provided of the CI system failing and passing unit tests and assessing performance.
The document provides an overview of Google App Engine (GAE) for running Java applications on cloud platforms. It explains that in GAE developers do not manage machines directly; instead they upload application binaries for GAE to run. It describes various services available in GAE, such as data storage, image processing, and cron jobs. The document also summarizes tools for local development and deployment, limitations of GAE around filesystem and socket access, and advantages like built-in logging and routing by domain headers.
BYOP: Custom Processor Development with Apache NiFi (DataWorks Summit)
Apache NiFi, a robust, scalable, and secure tool for data flow management, ships with over 212 processors to ingest, route, manipulate, and exfiltrate data from a variety of sources and consumers. But many users turn to NiFi to meet unusual requirements — from proprietary protocol parsing, to running inside connected cars, to offloading massive hardware metrics from oil rigs in the most remote environments. Rather than posting a community request for custom development or offloading unusual demands to unnecessary external systems, there’s an answer in NiFi. Learn how NiFi allows you to quickly prototype custom processors in the scripting language of your choice against live production data without affecting your existing flows. Easily translate prototypes to full-fledged processors to optimize performance and leverage the full provenance reporting infrastructure. Discover how the framework provides conventions to streamline your development and minimize common boilerplate code, and how the robust testing framework makes testing easy and, dare we say, fun.
Expected prior knowledge / intended audience: developers and data flow managers should have passing knowledge of Apache NiFi as a platform for routing, transforming, and delivering data through systems (a brief overview will be provided). The intended audience will have experience with programming in Groovy, Ruby, Jython, ECMAScript/Javascript, or Lua.
Takeaways: Attendees will gain an understanding in writing custom processors for Apache NiFi, including the component lifecycle, unit and integration testing, quick prototyping using a scripting language of their choice, and the artifact publishing and deployment process.
SAP TechEd 2013 session Tec118: Managing your environment (Chris Kernaghan)
This document discusses configuration management and provides examples of using Puppet and Chef for configuration management. It defines configuration management as managing the configuration of systems from hardware to applications. It explains that configuration management allows automating repetitive system administration tasks in a scheduled, consistent, auditable, and repeatable way. The document compares Puppet and Chef and provides examples of configuration scripts for each tool. It demos how to use Puppet and Chef to configure a system.
This document discusses plans for JAX-RS 2.1 (also known as JAX-RS.next), which aims to improve the JAX-RS specification. Some key areas of focus include improving performance through reactive programming and streams, better aligning with Java EE standards like CDI and JSON-B, filling gaps like server-sent events support, and continued evolution of the API. The overview outlines proposed features and the agenda for an upcoming presentation on the topic.
This document discusses WebSocket and its use in enterprise applications. It provides an overview of WebSocket, including when it should and should not be used. It also describes the Java API for WebSocket and the Tyrus project, including Tyrus features like security, broadcasting, monitoring, tracing, and clustering support using Oracle Coherence.
Apache Tajo: A Big Data Warehouse System on Hadoop
Presented by Jae-hwa Jeong, Apache Tajo committer and senior research engineer at Gruter, in Bigdata World Convention 2014 at Oct.23, Busan, Korea
My talk, delivered on 10 April 2014 at the ACCU Conference in Bristol.
This is the combination of a few talks I delivered over 2012 and 2013 with some latest updates.
This is an experience report based on the work of many developers from Atlassian and Spartez working for years on Atlassian JIRA.
If you have (or are going to have) thousands of automated tests and are interested in how they may impact you, this presentation is for you.
OSGi Enterprise R6 specs are out! - David Bosschaert & Carsten Ziegeler (mfrancis)
OSGi Community Event 2015
The Enterprise OSGi Specs R6 have been released this summer. There is a lot of good stuff in there! Asynchronous Services, REST management, HTTP Whiteboard, cool DS enhancements and much more. In this talk David and Carsten will give an overview of the new technologies so you can get started with it right away.
Provisioning with Oracle Cloud Stack Manager (Simon Haslam)
It’s easy to provision individual Oracle Cloud services, such as databases or Java application servers, from the instance creation pages in the Oracle cloud consoles. This presentation describes Oracle's Cloud Stack Manager tool which can be used with Oracle Cloud Infrastructure Classic to provision full sets of cloud services in a fully automated and repeatable manner.
This presentation was first delivered at Oracle's PaaS Forum in Budapest in March 2018.
OUGLS 2016: Guided Tour On The MySQL Source Code (Georgi Kodinov)
We will go over the layout of the MySQL code base, roughly following the query execution path. We will also cover how to extend MySQL with both built-in and pluggable add-ons.
The document discusses the Servlet 4.0 specification led by Ed Burns and Dr. Shing-Wai Chan. It provides an overview of the major new features of HTTP/2 including request/response multiplexing, binary framing, stream prioritization, server push, and header compression. It then outlines how features like server push could potentially be exposed through the Servlet API in Servlet 4.0. It concludes with an invitation for the community to contribute to the JSR-369 page by providing a list of JIRA components, use cases for sessionless applications, and references to async and thread safety in the specification and documentation.
Stateful Interaction In Serverless Architecture With Redis: Pyounguk Cho (Redis Labs)
This presentation discusses how to bring stateful behaviors to serverless architecture using Redis. It introduces the problem of enabling statefulness in serverless applications and proposes using Redis as a solution. Key considerations for the Redis-based architectural approach are discussed, including topology, high availability and scalability, and Redis configuration tuning. A demo is then presented to illustrate "Redis in Serverless" in action.
The document provides a roadmap for Oracle Coherence, including upcoming versions, new capabilities, and use cases. Key points include:
- Coherence 19 will support JDK 8 and 11 and include new development, runtime, and cloud capabilities.
- Common use cases for Coherence include as a web application platform, fast data platform, and for offloading backend/legacy systems.
- New features in recent/upcoming versions include persistence, federated caching, security improvements, and leveraging Java 8 features like lambdas and streams.
The document discusses Oracle Database Cloud Service, which allows users to quickly create databases using automated provisioning and easily move data and workloads between on-premise and cloud environments. It highlights the unified management capabilities of Enterprise Manager to manage databases across on-premise and cloud environments using the same architecture, software, and skills.
Enterprise Ready Test Execution Platform for Mobile Apps (Vijayan Srinivasan)
When it comes to mobile test execution, the Appium framework is engineers' default choice for writing test cases. Running Appium test cases against multiple Android versions in parallel can be achieved with another open source tool, Selenium Grid.
Unfortunately, Selenium Grid is not enterprise-ready: it cannot serve as a single test execution platform across enterprise-scale companies due to the following issues:
• Not available as a Web Application to run from Intuit Standard Containers (Tomcat, WHP)
• Device registry is maintained in-memory
• No support for High Availability / Disaster Recovery
• No support for External Device Cloud
• Not much debugging support (Screenshot, Exception or Log messages)
This talk covers the limitations of Selenium Grid and how Intuit modified it to suit enterprise needs.
When it comes to microservice architecture, sometimes all you want is to handle cross-cutting concerns (logging, authentication, caching, CORS, routing, load balancing, exception handling, tracing, resiliency, etc.). There may also be scenarios where you want to manipulate the request payload before it reaches your actual handler. This logic should not be repeated in every service; what you need is a single place to orchestrate all these concerns, and that is where middleware comes into the picture. In the demo I will cover how to orchestrate these cross-cutting concerns using Azure Functions in a serverless model.
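The middleware idea described above is language-agnostic: each concern wraps the next handler so it runs exactly once, in one place. A minimal sketch in Python (the names and the toy auth check are illustrative, not the Azure Functions API):

```python
# Minimal middleware-chain sketch: each middleware wraps the next handler,
# so cross-cutting concerns live in one place instead of in every service.
def logging_middleware(next_handler):
    def handler(request):
        request.setdefault("log", []).append(f"-> {request['path']}")
        response = next_handler(request)
        request["log"].append(f"<- {response['status']}")
        return response
    return handler

def auth_middleware(next_handler):
    def handler(request):
        if request.get("token") != "secret":   # toy auth check, not real security
            return {"status": 401, "body": "unauthorized"}
        return next_handler(request)
    return handler

def hello_handler(request):                    # the actual business logic
    return {"status": 200, "body": f"hello {request['user']}"}

# Compose the pipeline: the outermost middleware runs first.
app = logging_middleware(auth_middleware(hello_handler))

resp = app({"path": "/hello", "token": "secret", "user": "ada"})
print(resp["body"])  # hello ada
```

Because the handler never sees logging or auth code, adding a new concern (say, tracing) means adding one wrapper to the composition line, not touching every service.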
Similar to EclipseCon FR: Ignite talks, OSGi Resolver in action
Personal data protection is becoming ever more important as our real and digital lives merge.
Governments are starting to take this concern seriously, and laws constraining digital players are emerging:
* The right to be forgotten
* Cookie disclosure requirements
* GDPR in Europe
Clearly, these laws have technical implications.
We will look at the techniques currently used to meet these constraints, the emerging standards (UMA 2), ...
In a word: how to give users back control over their data.
OAuth 2.0 is a modern authorization standard (meaning: JSON everywhere) that controls access to web resources. This presentation will teach you the OAuth 2.0 dance steps and introduce you to the OpenID Connect choreography. We will also cover what's new: UMA, PoP, Privacy, Consent, and other barbaric acronyms.
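The first step of those dance moves — the authorization code flow — can be sketched as the redirect URL a client builds. The parameter names (`response_type`, `client_id`, `redirect_uri`, `scope`, `state`) come from RFC 6749; the hosts and client id below are placeholders, and including the `openid` scope is what brings OpenID Connect into the choreography:

```python
from urllib.parse import urlencode, urlparse, parse_qs

def build_authorize_url(base, client_id, redirect_uri, scope, state):
    """Build the user-facing redirect for the authorization code grant."""
    params = {
        "response_type": "code",   # ask for an authorization code
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,            # opaque value echoed back, CSRF protection
    }
    return f"{base}?{urlencode(params)}"

url = build_authorize_url(
    "https://auth.example.com/authorize",   # placeholder authorization server
    "my-app",
    "https://app.example.com/callback",
    "openid profile",                       # "openid" scope => OpenID Connect
    "xyz123",
)
print(urlparse(url).netloc)  # auth.example.com
```

After the user consents, the server redirects back to `redirect_uri` with `code` and the same `state`, and the client exchanges the code for tokens at the token endpoint (JSON everywhere, as promised).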
The document describes the Peergreen Platform. It provides an overview of Peergreen as a startup with experienced engineers and open source contributions. It then discusses in depth the guidelines, boot process, deployment system, shell, web integration, console, security features, and development tools of the Peergreen Platform. Finally, it outlines next steps such as adding Java Transaction and Persistence API support.
OW2 Utilities is a newly accepted project in the consortium. It aims to be the OW2 toolkit catalog for the common pieces of code that everybody rewrites for each new project. This presentation will start with a description of the project's goals and a little bit of history; then we will explain why it's important to maximize re-use within the consortium (reliability, ...). In the second part we will focus on the most useful and/or interesting modules provided by the project. This session will be developer-oriented, with code samples and effective use cases.
Windows Azure is an IaaS platform that is not reserved exclusively for .NET applications. This session explores and explains how to deploy the JOnAS application server on Microsoft's cloud.
The document discusses leveraging OSGi in Java EE business applications using JOnAS. It describes how OSGi can help build modular applications and introduces the benefits of a service-oriented approach. It also explains how hybrid applications can use the best of OSGi and Java EE, and how JOnAS is built on OSGi to provide Java EE services through an OSGi framework. This allows Java EE components and OSGi bundles to access each other's services.
The document outlines a collaboration between Peking University, Bull SAS, and CVIC SE to enhance the open source JOnAS application server. Peking University and Bull SAS will contribute improvements to the OW2 JOnAS project. The collaboration aims to develop business through joint projects utilizing JOnAS and to promote the technology in China and Europe.
The document discusses OSGi, an open-services ecosystem being developed to enable different devices and software to work together seamlessly. It outlines the objectives of OSGi including designing an adaptable global platform to maximize interoperability. It then provides an overview of the technologies being used like OSGi, OW2 μJOnAS, and Apache Felix. It zooms in on the French consortium's use case involving integrating heterogeneous sensors and actuators for energy efficiency in smart homes and buildings. A common demonstrator is proposed to showcase the components' interoperability in a smart home scenario.
TrustArc Webinar - 2024 Global Privacy Survey (TrustArc)
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Driving Business Innovation: Latest Generative AI Advancements & Success Story (Safe Software)
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Taking AI to the Next Level in Manufacturing.pdf (ssuserfac0301)
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
5. Ideas and approaches to help build your organization's AI strategy.
Best 20 SEO Techniques To Improve Website Visibility In SERP (Pixlogix Infotech)
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Main news related to the CCS TSI 2023 (2023/1695) (Jakub Marek)
An English 🇬🇧 translation of the presentation for the speech I gave about the main changes introduced by CCS TSI 2023 at the biggest Czech conference on communications and signalling systems on railways, held at the Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 online followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
Introduction of Cybersecurity with OSS at Code Europe 2024 (Hiroshi SHIBATA)
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
5th LF Energy Power Grid Model Meet-up Slides (DanBrown980551)
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join either through an online Microsoft Teams session or in person at TU/e, located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users want to take full advantage of their devices' features, but many of those features trade security for convenience and capability. This best-practices guide outlines steps users can take to better protect personal devices and information.
Webinar: Designing a schema for a Data Warehouse (Federico Razzoli)
Are you new to data warehouses (DWH)? Do you need to check whether your data warehouse follows the best practices for a good design? In both cases, this webinar is for you.
A data warehouse is a central relational database that contains all measurements about a business or an organisation. This data comes from a variety of heterogeneous data sources, which includes databases of any type that back the applications used by the company, data files exported by some applications, or APIs provided by internal or external services.
But designing a data warehouse correctly is a hard task, which requires gathering information about the business processes that need to be analysed in the first place. These processes must be translated into so-called star schemas, that is, denormalised databases where each table represents either a dimension or the facts.
We will discuss these topics:
- How to gather information about a business;
- Understanding dictionaries and how to identify business entities;
- Dimensions and facts;
- Setting a table granularity;
- Types of facts;
- Types of dimensions;
- Snowflakes and how to avoid them;
- Expanding existing dimensions and facts.
HCL Notes and Domino License Cost Reduction in the World of DLAU (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar, with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement right away
Fueling AI with Great Data with Airbyte Webinar (Zilliz)
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
GraphRAG for Life Science to increase LLM accuracy (Tomaz Bratanic)
GraphRAG for the life science domain, where you retrieve information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers.
Programming Foundation Models with DSPy - Meetup Slides (Zilliz)
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.