openQRM how-to: Setup UEC and openQRM cloud
The perfect server: openQRM, UEC and EC2 on Ubuntu 10.10
This documentation is brought to you by openQRM Enterprise
http://www.openqrm-enterprise.com
Document Version : 27.10.2010
openQRM Enterprise GmbH
Berrenrather Straße 188c
50937 Köln / Germany
Telefon : +49 (0) 221 995589-10
Fax : +49 (0) 221 995589-20
Mail : info@openqrm-enterprise.com
Table of Contents
The perfect server: openQRM, UEC and EC2 on Ubuntu 10.10
Hybrid Cloud Computing with openQRM
Requirements
Install the system with Ubuntu 10.10 (64bit) as a "Node Controller"
Install openQRM from the source repository
Install the Ubuntu Enterprise Cloud Controller in a KVM VM
Configure the Ubuntu Enterprise Cloud
Import an AMI from the Ubuntu Enterprise Cloud (or Amazon EC2)
Run the imported AMI on a local KVM VM
Export an openQRM Image to Amazon EC2
Thanks
Urls
Hybrid Cloud Computing with openQRM
The goal of this HowTo is a single system running both the Ubuntu Enterprise Cloud
(UEC) and the openQRM Cloud. This system allows migrating services from UEC and
Amazon EC2 to the openQRM Cloud, and from the openQRM Cloud back to UEC and Amazon EC2.
Requirements
- 1 64-bit system with the VT (Virtualization Technology) CPU extension
- 2 GB RAM (or more)
- 200 GB disk space (or more)
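Whether the machine actually provides the required VT extension can be verified up front from a shell. The following check is a common Linux technique, not part of the original HowTo:

```shell
# Count the CPU flags that indicate hardware virtualization support:
# "vmx" = Intel VT, "svm" = AMD-V. A count of 0 means KVM full
# virtualization (as used by UEC and openQRM here) will not work.
count=$(grep -c -E 'vmx|svm' /proc/cpuinfo 2>/dev/null || true)
count=${count:-0}
if [ "$count" -gt 0 ]; then
    echo "VT extension present on $count logical CPU(s)"
else
    echo "no VT extension found"
fi
```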
Install the system with Ubuntu 10.10 (64bit) as a "Node Controller"
• Boot the system from the Ubuntu CD
• Select "Install Ubuntu Enterprise Cloud"
• Set a static ip-address
◦ in this Howto it will be 192.168.88.100, hostname will be "perf"
• Continue with an empty (blank) Cloud Controller address
• During the Cloud package selection just select "Node Controller"
◦ deselect everything else
• In the partitioning screen
◦ Create a partition (primary) for /
▪ in this Howto it will be /dev/sda1
◦ Create a partition (primary) for swap
▪ in this Howto it will be /dev/sda2
◦ Create a partition (primary) to be used by lvm
▪ this should be huge, in this Howto it will be /dev/sda3
After the installation has finished, reboot into the fresh Ubuntu 10.10 system.
Now create an iso image from the Ubuntu 10.10 CD. Insert the CD, open a terminal and run :
sudo bash
mkdir /isos
dd if=/dev/cdrom of=/isos/ubuntu-10.10.iso
Install openQRM from the source repository
Installing openQRM from the project's subversion repository will include the latest features.
sudo bash
apt-get install subversion nfs-kernel-server
cd
mkdir openqrm
cd openqrm
svn co https://openqrm.svn.sourceforge.net/svnroot/openqrm openqrm
cd openqrm/trunk/openqrm/src
make && make install && make start
Then point your browser to http://192.168.88.100/openqrm to finalize the openQRM Server
configuration. The default username and password after a fresh installation is "openqrm"
(for both user and password). Please make sure you update this default password after the first login!
In the first setup screen please select "br0" as the openQRM interface.
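For the "br0" selection to work, the bridge has to exist on the host. On Ubuntu 10.10 a bridge enslaving the first NIC is typically declared in /etc/network/interfaces roughly as sketched below; only the 192.168.88.100 address is taken from this HowTo, the gateway address and device names are assumptions:

```
# /etc/network/interfaces (sketch; needs the bridge-utils package)
auto br0
iface br0 inet static
    address 192.168.88.100
    netmask 255.255.255.0
    gateway 192.168.88.1
    bridge_ports eth0
    bridge_stp off
```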
Now we are going to enable the required openQRM plugins for this HowTo.
• Enable and start the following plugins :
◦ kvm, kvm-storage, lvm-storage, hybrid-cloud, tftpd
• Just enable the "dhcpd" plugin. Do not start it (yet) !
We are now going to install the Ubuntu Enterprise Cloud Controller within a virtual machine
managed by openQRM. The virtual-machine type for the UEC CC will be "kvm-storage".
Install the Ubuntu Enterprise Cloud Controller in a KVM VM
First we need to prepare the partition dedicated as our image store (/dev/sda3).
Open a terminal and run the following commands :
sudo bash
pvcreate /dev/sda3
vgcreate lvols /dev/sda3
Now we are going to prepare the volume to install the UEC CC on.
• Goto Base -> Components -> Storage -> New Storage
• Create a new kvm-storage server (type KVM LVM Storage) using the openQRM
system as the resource
• Provide a name for the kvm-storage server, here we will use "kvmstorageserver"
• Goto Base -> Components -> Storage and click on "Mgmt" of the kvm-storage
◦ Select the "lvols" volume group
◦ Create a new volume for the UEC "Cloud Controller" on the kvm-storage server.
◦ Name it "ubuntucc" and give it at least 40GB volume size.
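Under the hood the kvm-storage plugin creates an ordinary LVM logical volume in the "lvols" volume group. Done by hand on the host, the equivalent would look roughly like this (a sketch of standard LVM commands, not the plugin's exact invocation; it requires root and the volume group created earlier):

```shell
sudo bash
# Create a 40 GB logical volume "ubuntucc" in the "lvols" volume
# group prepared earlier with pvcreate/vgcreate.
lvcreate -L 40G -n ubuntucc lvols
# Show the resulting volume:
lvdisplay /dev/lvols/ubuntucc
```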
• Goto Base -> Components -> Image -> New Image
◦ Select the kvm-storage server "kvmstorageserver"
◦ Provide a name for the "image" object, here we will use "ubuntucc"
Here is the Image list after we have created the Image object.
Now we are going to prepare the VM resource for the UEC CC.
Create the kvm-storage Host
• Goto Base -> Appliance -> Create and create a new kvm-storage Host appliance
using the openQRM system as the resource
◦ Provide a name for the appliance, here we will use "kvmstoragehost"
◦ Select "KVM-Storage Host" as the resource type
Here is the freshly created KVM-Storage Host in the Appliance overview.
Create a new kvm-storage VM:
• Goto Plugins -> Virtualization -> kvm-storage -> VM-Manager
◦ Select the kvm-storage Host
• Create a new kvm-storage VM on the Host by clicking on the + icon
◦ Provide a name for the VM, here we will use "kvmstoragevm1"
◦ Set the VMs memory to at least 1024 MB
◦ Select "iso image" as the boot-medium and fill in the path to the ubuntu-10.10.iso
▪ /isos/ubuntu-10.10.iso
Creating the VM will reserve the VM's components and create a new, idle resource.
Now we will put the "image" and the "resource" we have created together via an "appliance".
• Goto Base -> Appliance -> Create
• Select the newly created "idle" resource (the kvm-storage VM)
◦ Provide a Name for the appliance, here we will use "ubuntucc"
◦ Leave the default kernel
◦ Select the "ubuntucc" image
◦ Select the resource type "KVM-Storage VM"
29. Here is the started appliance.
The idle resource will now boot into the Ubuntu installation, accessible via VNC.
Install vncviewer:
sudo apt-get install xtightvncviewer
30. You can access the Install screen via "vncviewer" on the openQRM Server.
vncviewer 192.168.88.100:50
Hints
• For kvm-storage VMs the first VNC id will be 50
• If you are logged in via ssh you need to have X-forwarding enabled
◦ e.g. ssh -X openqrm
• A remote VM console integrated into openQRM is available from openQRM Enterprise
◦ http://www.openqrm-enterprise.com
31. For the Ubuntu Enterprise Cloud Controller installation follow the steps below:
• Select "Install Ubuntu Enterprise Cloud"
• Manually configure the network device. Set a static ip-address
◦ in this HowTo it will be 192.168.88.101 (hostname ubuntucc)
32. • Otherwise go with the default installation parameters
• Later in the installation, set up the ip-addresses to be used by UEC
◦ In this HowTo this will be 192.168.88.102-192.168.88.122
Reboot after the installation of the Ubuntu Enterprise Cloud Controller.
Please notice: the VM will still try to boot from the CD and fail!
This is because the VM is still configured to boot from the iso image.
Please see the next part for how to re-configure the VM to boot from local disk.
33. Before we boot the now fully installed Ubuntu Enterprise Cloud Controller we will create
a snapshot of its disk. Then we will re-configure the "ubuntucc" appliance to use the snapshot
instead of the original volume. This enables you to roll back at any time in case you need a
fresh Cloud Controller.
Here is how to create the snapshot:
• Goto Base -> Appliance and stop the ubuntucc appliance
34. • Goto Base -> Components -> Storage and click on "Mgmt" of the kvm-storage
◦ Select the "lvols" volume group
◦ Fill in a snapshot name (here "ccsnapshot1") and provide a size (here 20GB).
◦ Click on "snap"
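Under the hood the "snap" button takes an LVM snapshot of the Cloud Controller volume. As with the volume creation earlier, here is a dry-run sketch with the names from this HowTo; the command is only printed, not executed, and openQRM's real invocation may differ in detail.

```shell
# Dry-run sketch of the LVM snapshot behind the "snap" button.
# Names match this HowTo; openQRM's exact invocation may differ.
ORIGIN="/dev/lvols/ubuntucc"   # the Cloud Controller volume
SNAP="ccsnapshot1"             # snapshot name filled in above
SIZE="20G"                     # snapshot size, here 20GB
CMD="lvcreate -s -L $SIZE -n $SNAP $ORIGIN"
echo "$CMD"
```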
36. Now we have to create a new Image from the snapshot volume:
• Goto Base -> Components -> Image -> New Image
◦ Select the kvm-storage server "kvmstorageserver"
◦ Provide a name for the new "image" object, here we will use "ccsnapshot1" and
select the "ccsnapshot1" volume as the root-device
37. • Goto Base -> Appliance and edit the ubuntucc appliance
◦ Adjust the Root-device to the “ccsnapshot1” image and save
38. Now re-configure the VM to do a local boot:
• Goto Plugins -> Virtualization and stop the VM
39. • When the VM is stopped click on "Config"
• In the config screen edit the boot-order
40. • Set the boot-order to "local boot"
• Click on back to go to the Virtual Machine list
42. • Goto Base -> Appliance and start the ubuntucc appliance again
You can now access the VM again via "vncviewer" on the openQRM Server.
vncviewer 192.168.88.100:50
to check that it boots up from the local disk.
43. Configure the Ubuntu Enterprise Cloud
You can now access the UEC Configuration Panel at https://192.168.88.101:8443
The first time you connect, the browser will warn "This Connection is Untrusted".
Add an exception for it and you will be forwarded to the login panel.
The default login for the UEC is user "admin" with the password "admin".
44. Now it is a good time to download your UEC Cloud credentials.
45. Here is how to install the credentials on your openQRM Server. Please open a terminal and
run:
mkdir .euca
mv euca2-admin-x509.zip .euca/
cd .euca/
unzip euca2-admin-x509.zip
. eucarc
46. After that you are able to use the UEC commandline tools, e.g. euca-describe-availability-zones.
Next step is to set the password for user "eucalyptus" on the openQRM server which is also a
UEC Node Controller.
sudo bash
passwd eucalyptus
We need this password in the following step, on the UEC system itself, to discover the Node
Controller.
Login to the UEC Cloud Controller (the VM at 192.168.88.101) and run:
sudo euca_conf --discover-nodes
This will automatically rsync the ssh-keys and add the Node Controller to the UEC cluster.
47. Now we go back to the UEC Admin UI and download one of the pre-made UEC Images.
For this HowTo we selected the Ubuntu Lucid 10.04 64bit image. Clicking on Download will
download and install the image in the UEC Cloud.
Before we actually start an instance of this AMI we need to create an ssh keypair and open port
22 on the UEC firewall to enable ssh login.
On the openQRM Server open a terminal and run:
. .euca/eucarc
euca-add-keypair mykey > ~/.euca/mykey.priv
euca-authorize -P tcp -p 22 -s 0.0.0.0/0 default
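One detail worth checking afterwards: ssh refuses private keys that are readable by other users, so tighten the permissions on the key file. The snippet below demonstrates this on a scratch file it creates itself; on the real system apply the same chmod to ~/.euca/mykey.priv.

```shell
# Demonstration on a scratch file; on the real system apply the
# same chmod to ~/.euca/mykey.priv so ssh accepts the key.
KEY="$(mktemp)"
echo "demo-private-key-material" > "$KEY"
chmod 600 "$KEY"
PERMS="$(stat -c '%a' "$KEY")"
echo "$PERMS"
rm -f "$KEY"
```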
48. Starting an instance on the UEC Cloud
Now we are going to start an instance on the Ubuntu Enterprise Cloud via the euca-run-instances command.
Open a terminal on the openQRM server and follow the steps below:
. .euca/eucarc
euca-run-instances -k mykey emi-DEBF106A -t m1.small
Please notice that you need to get the AMI name (here emi-DEBF106A) from the image
overview in UEC.
After a short while the instance is running and we can login via ssh.
ssh -i .euca/mykey.priv ubuntu@192.168.88.102
49. To prepare the Import of this AMI into openQRM we now need to adjust the
/root/.ssh/authorized_keys file on the AMI.
Simply append /home/ubuntu/.ssh/authorized_keys to /root/.ssh/authorized_keys to enable
passwordless ssh login for the root user too.
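This adjustment takes only a few shell commands. The sketch below replays the step in a scratch directory so it is safe to run anywhere; on the real instance drop the $ROOT prefix (i.e. use the real /home/ubuntu and /root paths) and run the cat line as root.

```shell
# Replay of the authorized_keys step in a scratch directory.
# On the real instance: run as root against the real paths (no $ROOT).
ROOT="$(mktemp -d)"
mkdir -p "$ROOT/home/ubuntu/.ssh" "$ROOT/root/.ssh"
echo "ssh-rsa AAAA...demo mykey" > "$ROOT/home/ubuntu/.ssh/authorized_keys"
# the actual step: append the ubuntu user's key to root's authorized_keys
cat "$ROOT/home/ubuntu/.ssh/authorized_keys" >> "$ROOT/root/.ssh/authorized_keys"
chmod 600 "$ROOT/root/.ssh/authorized_keys"
COUNT="$(grep -c '^ssh-rsa' "$ROOT/root/.ssh/authorized_keys")"
echo "$COUNT"
```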
50. Import an AMI from the Ubuntu Enterprise Cloud (or Amazon EC2)
First step to import from the Ubuntu Enterprise Cloud is to define the UEC credentials in
openQRM.
• Goto Plugins -> Components -> Deployment -> Hybrid-Cloud -> Accounts
◦ Provide an Account name (you can choose any name)
◦ Set the path to the UEC rc-config file
◦ Set the path to the ssh-key file
◦ Set the type of the account
51. The second step for importing the AMI of the running instance is to create a volume on a
storage server in openQRM.
Create an lvm-storage (NFS) server in openQRM:
• Goto Base -> Components -> Storage -> New Storage
• Create a new lvm-storage server (type LVM Storage NFS) using the openQRM
system as the resource
52. ◦ Provide a name for the lvm-storage server, here we will use "lvmstorageserver"
57. Now we have to create a new Image from the new empty volume:
• Goto Base -> Components -> Image -> New Image
◦ Select the lvm-storage server "lvmstorageserver"
58. • Provide a name for the new "image" object, here we will use "uecubuntuimport" and
select the "uecubuntuimport" volume as the root-device
Everything is prepared for the import.
59. Import the AMI
• Goto Plugins -> Deployment -> Hybrid-Cloud -> Import
◦ Select your UEC account
60. ◦ On the next screen select the running instance on the UEC
61. ◦ On the next screen select the Image to transfer the AMI to and click on "put"
63. openQRM will now import the AMI. You can check the Event list for the progress.
64. Run the imported AMI on a local KVM VM
For running the imported AMI please now start the dhcpd plugin.
Please make sure to have only one dhcp-server running in your setup!
Either have openQRM serving dhcp or the UEC Cloud Controller.
65. Now create a kvm Host:
• Goto Base -> Appliance -> Create and create a new kvm Host appliance using the
openQRM system as the resource
66. ◦ Provide a name for the appliance, here we will use "kvmhost"
◦ Select "KVM Host" as the resource type
67. Create a new KVM VM:
• Goto Plugins -> Virtualization -> kvm -> VM-Manager, select the kvm Host
68. • Create a new kvm VM on the Host by clicking on the + icon
69. ◦ Provide a name for the VM, here we will use "kvmvm1"
70. Creating the VM will reserve the VM's components and create a new, idle resource.
71. Now we will put the "image" (the imported AMI) and the "resource" we have created (the new
KVM VM) together via an "appliance".
• Goto Base -> Appliance -> Create
◦ Select the newly created "idle" resource (the idle kvm VM)
72. ◦ Provide a Name for the appliance, here we will use "uecubuntuimport"
◦ Leave the default kernel
◦ Select the "uecubuntuimport" image
◦ Select the resource type "KVM VM"
73. • Save and start the appliance
The idle resource will now reboot and start the "uecubuntuimport" image. The VM is now
accessible via VNC:
vncviewer 192.168.88.100:1
Hints
• For kvm VMs the first VNC id will be 1
75. You can ssh to the running appliance in the same way as we did for the AMI.
ssh -i .euca/mykey.priv root@192.168.88.253
Please get the ip of the appliance in the openQRM resource overview.
The imported AMI, now available in openQRM as an “Image”, can be easily made available in
the openQRM Cloud.
How to setup and use the openQRM Cloud is covered in another HowTo at:
http://www.openqrm-enterprise.com/news/details/article/howto-setup-your-own-openqrm-cloud-with-kvm-on-ubuntu-lucid-lynx.html
76. Export an openQRM Image to Amazon EC2
To export an openQRM Image to Amazon EC2 (or to an Ubuntu Enterprise Cloud) we first
have to install the Amazon ec2-ami-tools and ec2-api-tools.
Download the Amazon EC2 API Tools from http://aws.amazon.com/developertools/351
Download the Amazon EC2 AMI Tools from http://aws.amazon.com/developertools/368
Install both tools on the openQRM Server at /home/[username]/aws
cd
mkdir -p aws .ec2
cp ec2-ami-tools.zip ec2-api-tools.zip aws
cd aws
unzip ec2-ami-tools.zip
unzip ec2-api-tools.zip
Please make sure to have a Java JDK installed. Also you need to install ruby and curl.
sudo apt-get install ruby curl
The ec2-tools require this.
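A quick sanity check for these prerequisites might look like this; it is only a sketch that reports which of the required tools are already on the PATH.

```shell
# Report whether the tools the ec2-tools depend on are installed.
REPORT="$(for tool in java ruby curl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING - please install it"
  fi
done)"
echo "$REPORT"
```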
The next step is to create an Amazon rc-config file allowing the ec2-tools to work seamlessly.
A sample ec2rc config file looks like this (of course this example contains random user data):
# for java to work ok
export JAVA_HOME=/home/matt/java/jdk1.6.0_14
# aws api tools
export EC2_HOME=/home/matt/aws/ec2-api-tools-1.3-57419
# aws ami tools
export EC2_AMITOOL_HOME=/home/matt/aws/ec2-ami-tools-1.3-57676
export EC2_PRIVATE_KEY=/home/matt/.ec2/pk-123456.pem
export EC2_CERT=/home/matt/.ec2/cert-123456.pem
# EU
export EC2_URL=https://ec2.eu-west-1.amazonaws.com
# US
#export EC2_URL=https://us-east-1.ec2.amazonaws.com
# keys
export EC2_ACCESS_KEY='123456'
export EC2_SECRET_KEY='123456'
export PATH=$JAVA_HOME/bin:$PATH:$EC2_HOME/bin:$EC2_AMITOOL_HOME/bin:/usr/games:/home/matt/scripts
# aws user id for the cmdline tools
export EC2_USER_ID="123456"
77. Please save this content as /home/[username]/.ec2/ec2rc
Also please download your AWS Private-key and your AWS Certificate to
/home/[username]/.ec2/
After that please source the ec2rc and check the functionality of the ec2-tools by running
"ec2-describe-regions".
78. With the Amazon account credentials installed we are now setting up another Hybrid-Cloud
account.
• Goto Plugins -> Components -> Deployment -> Hybrid-Cloud -> Accounts
◦ Provide an Account name (you can choose any name)
◦ Set the path to the EC2 rc-config file
◦ Set the path to the ssh-key file
◦ Set the type of the account
79. Now we are ready to export the openQRM Image.
• Goto Plugins -> Deployment -> Hybrid-Cloud -> Import
◦ Select your EC2 account
80. ◦ On the next screen select the Image to transfer to Amazon
81. ◦ On the next screen provide an S3 bucket name for the AMI and configure
region, size and architecture.
82. Clicking on "export" will start the migration.
openQRM is now transferring the Image to the Amazon Cloud as a new AMI.
It will be available for deployment after bundling and uploading of the AMI have finished.
You can get a detailed log about the migration at /tmp/uecubuntuexport.export.debug.log.
tail -f /tmp/uecubuntuexport.export.debug.log
Same as for the Import you can also check the Event list for the progress.
As soon as the migration has finished the exported openQRM Image will be available in the
Amazon EC2 Cloud. You can start it e.g. via the EC2 commandline tools on the openQRM
Server. Open a terminal and run:
. /home/[username]/.ec2/ec2rc
ec2-run-instances [ami-name] -k [ssh-keypair]
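As a dry-run illustration, the placeholders expand like this. Both values below are made up for the example; substitute your own AMI id (from the EC2 console or ec2-describe-images) and keypair name. The command is only printed here, not executed.

```shell
# Dry run: assemble the launch command from placeholder values.
# Both values are hypothetical - use your own AMI id and keypair name.
AMI="ami-12345678"
KEYPAIR="mykey"
CMD="ec2-run-instances $AMI -k $KEYPAIR"
echo "$CMD"
```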
83. Thanks
We hope you enjoyed this HowTo focusing on the hybrid-cloud features of openQRM!
Urls
UEC Installation Howto – https://help.ubuntu.com/community/UEC/CDInstall
The openQRM Project – http://www.openqrm.com
openQRM Enterprise – http://www.openqrm-enterprise.com
openQRM Documentation – http://www.openqrm-enterprise.com/news/details/article/in-depth-documentation-of-openqrm-available.html
openQRM Cloud HowTo – http://www.openqrm-enterprise.com/news/details/article/howto-setup-your-own-openqrm-cloud-with-kvm-on-ubuntu-lucid-lynx.html
This documentation is brought to you by openQRM Enterprise
http://www.openqrm-enterprise.com
Copyright 2010, Matthias Rechenburg matt@openqrm-enterprise.com