Automated Amazon EC2 Cloud deployments with openQRM 5.1 on Debian Wheezy
This HowTo is about how to manage Public and Hybrid Cloud deployments with openQRM. As the deployment manager for Amazon EC2 and its API-compatible derivatives (e.g. Eucalyptus), openQRM can fully automate instance provisioning and add further value by attaching automated application deployment via Puppet, automated monitoring via Nagios, and high availability at the infrastructure level to the provider's cloud features. The whole workflow of instance deployment in openQRM is exactly the same as for local resources in the internal IT environment.
Requirements
One physical server; alternatively, the installation can also be done within a virtual machine
at least 1 GB of memory
at least 100 GB of disk space
optional: VT (Virtualization Technology) enabled in the system BIOS so that the openQRM server can run KVM virtual machines later
Install openQRM 5.1 on Debian Wheezy
Install a minimal Debian Wheezy on a physical Server
Install and initialize openQRM 5.1
A detailed HowTo about the above initial starting point is available at "Install openQRM 5.1 on Debian Wheezy (resources/documentation-howtos/howtos/install-openqrm-51-on-debian-wheezy.html)".
For this HowTo we have used the same openQRM server as for the HowTo 'Virtualization with KVM and openQRM 5.1 on Debian Wheezy'. That means we are going to add functionality to an existing openQRM setup, which shows that openQRM manages all the different virtualization and deployment types seamlessly.
This means you can use either the "Install openQRM 5.1 on Debian Wheezy (resources/documentation-howtos/howtos/install-openqrm-51-on-debian-wheezy.html)" or the "Virtualization with KVM and openQRM 5.1 on Debian Wheezy (resources/documentation-howtos/howtos/virtualization-with-kvm-and-openqrm-51-on-debian-wheezy.html)" HowTo as the starting point.
Set a custom Domain name
As the first step after the openQRM installation and initialization it is recommended to configure a custom domain name for the openQRM management network.
In this use case the openQRM server has the private class C IP address 192.168.178.5/255.255.255.0, as set up in the previous "Howto install openQRM 5.1 on Debian Wheezy (resources/documentation-howtos/howtos/install-openqrm-51-on-debian-wheezy.html)". Since the openQRM management network is a private one, any syntactically correct domain name can be used, e.g. 'my123cloud.net'.
The default domain name pre-configured in the DNS plugin is "oqnet.org".
Best practice is to use the 'openqrm' command-line util to set up the domain name for the DNS plugin. Please log in to the openQRM server system and run the following command as 'root' in a terminal:
/usr/share/openqrm/bin/openqrm boot-service configure -n dns -a default -k OPENQRM_SERVER_DOMAIN -v my123cloud.net
The output of the above command will look like:
root@debian:~# /usr/share/openqrm/bin/openqrm boot-service configure -n dns -a default -k OPENQRM_SERVER_DOMAIN -v my123cloud.net
Setting up default Boot-Service Konfiguration of plugin dns
root@debian:~#
To (re)view the current configuration of the DNS plugin please run:
/usr/share/openqrm/bin/openqrm boot-service view -n dns -a default
Enabling Plugins
For this HowTo please enable and start the following plugins in the sequence below:
dns plugin - type Networking
dhcpd plugin - type Networking
tftpd plugin - type Networking
device-manager plugin - type Management
nfs-storage plugin - type Storage
lvm-storage plugin - type Storage
nagios3 plugin - type Monitoring
puppet plugin - type Deployment
sshterm plugin - type Management
hybrid-cloud plugin - type Deployment
Hint: You can use the filter in the plugin list to easily find plugins by their type!
Install the latest Amazon EC2 Tools
Go to Plugins -> Deployment -> Hybrid-Cloud -> About
There you can find the URLs and information about the latest Amazon EC2 API and AMI tools.
Here are the steps to install the Amazon EC2 tools. Please SSH into the openQRM server as 'root' and run the following commands:
apt-get update && apt-get install unzip default-jdk
wget https://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip
wget https://s3.amazonaws.com/ec2-downloads/ec2-ami-tools.zip
unzip ec2-api-tools.zip
unzip ec2-ami-tools.zip
mkdir -p /usr/local/ec2
cp -r ec2-ami-tools-1.4.0.9/* /usr/local/ec2/
cp -r ec2-api-tools-1.6.8.1/* /usr/local/ec2/
Please note: the version numbers may differ when newer EC2 tools become available!
Then please add the following to the system-wide profile /etc/profile:
# EC2 Tools
export EC2_HOME=/usr/local/ec2
export PATH=$PATH:$EC2_HOME/bin
export JAVA_HOME=/usr
The EC2 API and AMI tools are now installed and available in the system path.
Please note: log out of the openQRM server now and log back in to activate the new profile settings in your environment. After re-login, restart the openQRM server to activate the profile in its environment as well:
/etc/init.d/openqrm restart
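Before restarting, it can save a round of debugging to confirm that the new environment is actually active. The following convenience check assumes the variable names and paths from the profile fragment above; the `check_ec2_env` helper itself is not part of openQRM or the EC2 tools, just a hypothetical sketch:

```shell
#!/bin/bash
# Convenience check (not part of openQRM): verify that the EC2 tool
# environment from /etc/profile is active in the current shell.
# Usage: source /etc/profile && check_ec2_env
check_ec2_env() {
  local ok=1
  # EC2_HOME must point at the directory the tools were copied into
  [ -d "${EC2_HOME:-}" ] || { echo "EC2_HOME is not set or does not exist"; ok=0; }
  # the API tools must be reachable via PATH
  command -v ec2-describe-regions >/dev/null 2>&1 || { echo "EC2 API tools are not on PATH"; ok=0; }
  # JAVA_HOME must contain a runnable java binary
  [ -x "${JAVA_HOME:-}/bin/java" ] || { echo "JAVA_HOME does not point at a Java installation"; ok=0; }
  [ "$ok" -eq 1 ] && echo "EC2 tool environment looks good"
}
```

If any of the three checks prints a complaint, fix /etc/profile and re-login before restarting openQRM.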
To re-check the configuration please run:
ec2-describe-regions -O [your-aws-access-key] -W [your-aws-secret-key]
The output of the above command looks like:
root@debian:~# ec2-describe-regions -O XXXXXXXXXXXXXXXXXXXXX -W YYYYYYYYYYYYYYYYYYYYYYYY
REGION eu-west-1 ec2.eu-west-1.amazonaws.com
REGION sa-east-1 ec2.sa-east-1.amazonaws.com
REGION us-east-1 ec2.us-east-1.amazonaws.com
REGION ap-northeast-1 ec2.ap-northeast-1.amazonaws.com
REGION us-west-2 ec2.us-west-2.amazonaws.com
REGION us-west-1 ec2.us-west-1.amazonaws.com
REGION ap-southeast-1 ec2.ap-southeast-1.amazonaws.com
REGION ap-southeast-2 ec2.ap-southeast-2.amazonaws.com
root@debian:~#
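One caveat with the command above: keys passed via -O/-W are visible in `ps` output and in the shell history. A safer pattern is to keep them in a root-only file and wrap the call; the file path, variable names, and `ec2_regions` helper below are assumptions for this sketch, not part of the EC2 tools:

```shell
#!/bin/bash
# Hypothetical wrapper around ec2-describe-regions that reads the AWS keys
# from a root-only file instead of the command line.
# Expected format of the credentials file (create it with chmod 600):
#   AWS_ACCESS_KEY=...
#   AWS_SECRET_KEY=...
CRED_FILE="${CRED_FILE:-/root/.ec2-credentials}"

ec2_regions() {
  # load AWS_ACCESS_KEY / AWS_SECRET_KEY into the environment
  . "$CRED_FILE"
  ec2-describe-regions -O "$AWS_ACCESS_KEY" -W "$AWS_SECRET_KEY"
}
```

Create /root/.ec2-credentials with `chmod 600` and the two key lines before calling `ec2_regions`; the same pattern works for the other ec2-* commands.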
Configure which Amazon EC2 regions to use
Best practice is to use the 'openqrm' command-line util to configure which Amazon regions the hybrid-cloud plugin should use. Please log in to the openQRM server system and run the following command as 'root' in a terminal:
/usr/share/openqrm/bin/openqrm boot-service configure -n hybrid-cloud -a default -k OPENQRM_PLUGIN_HYBRID_CLOUD_REGIONS -v "eu-west-1,us-west-1"
To (re)view the current configuration of the Hybrid-Cloud plugin please run:
/usr/share/openqrm/bin/openqrm boot-service view -n hybrid-cloud -a default
Create a Hybrid-Cloud Account
Go to Plugins -> Deployment -> Hybrid-Cloud -> Actions and click on 'Add new Account'
Provide an account name, the AWS access and secret key, and a description for the account. Then click on submit.
Create a custom auto-configuration script for the EC2 Instance on S3
The Amazon EC2 integration in openQRM allows you to attach a custom script to a starting instance, which the instance then runs on system startup. This can be used in combination with the Puppet integration to fully pre-configure an instance in EC2. The easiest way to create such a custom auto-configuration script is to use the S3 action in the account overview. It provides a file manager for S3 and makes it easy to upload files; files set to the 'public-read' permission are directly available via HTTP. As an example we create a small bash script which simply writes some text to a file.
On your Desktop create a new file named 'my-custom-auto-configure.sh' with the following content:
#!/bin/bash
echo "Here custom commands are running on instance startup" > /tmp/my-custom-auto-configure.log
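Since the script runs unattended at instance startup, it helps to log a few boot-time facts you can inspect afterwards. Here is a slightly richer variant of the two-line example; only the log file path is taken from above, the extra diagnostic commands are assumptions:

```shell
#!/bin/bash
# Richer variant of my-custom-auto-configure.sh: record basic instance
# facts so a misbehaving instance can be debugged from the log alone.
LOG="${LOG:-/tmp/my-custom-auto-configure.log}"
{
  echo "auto-configure started: $(date -u +%Y-%m-%dT%H:%M:%SZ)"
  echo "hostname: $(hostname)"
  echo "kernel: $(uname -r)"
  echo "Here custom commands are running on instance startup"
} > "$LOG"
```

Upload this file to S3 exactly like the simple version; the log then shows when the script ran and on which instance.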
Now go to Plugins -> Deployment -> Hybrid-Cloud -> Actions -> S3 and create a new S3 bucket.
Click on 'Files in bucket' to list the files in the bucket.
Also please re-check /tmp/my-custom-auto-configure.log on the instance to verify that your custom script was executed.
And here is the Datacenter Dashboard after we have created the Amazon EC2 instance.
You can now fully automate your Amazon EC2 deployments with openQRM 5.1.
Hope you enjoyed this HowTo!
Add more functionality to your openQRM setup
To continue and further enhance your openQRM setup there are several things to do:
Enable the highavailability plugin to automatically gain HA for your servers
Enable the cloud plugin for complete self-service deployment of your server and software stack to end users
Enable further virtualization plugins and integrate remote virtualization hosts for a fully distributed cloud environment
Enable further storage and deployment plugins to automatically provision your virtualization hosts and other physical systems
... and more.
Links
openQRM Community: http://www.openqrm.com/
openQRM Project at SourceForge: http://sourceforge.net/projects/openqrm/
openQRM Enterprise: http://www.openqrm-enterprise.com/
openQRM on Twitter: https://twitter.com/openQRM
openQRM on Facebook: https://www.facebook.com/pages/openQRM-Enterprise/324904179687
Amazon EC2: http://aws.amazon.com/ec2/
LinuxCOE: http://linuxcoe.sourceforge.net/