This document provides an overview of hosted engine deployment and maintenance in oVirt/RHV, covering the vintage and new node 0 deployment flows, the hosted engine services, and the maintenance modes. It describes the roles of the ovirt-ha-agent and ovirt-ha-broker services in keeping the engine VM highly available, including how host scores are calculated and used to decide migration and restart actions, and it discusses the hosted engine configuration files, metadata storage, and technical details around deployment, internals, and troubleshooting.
Hosted-Engine-4.3-deep-dive
1. Hosted Engine deep dive
Everything you always wanted to know
about oVirt Hosted Engine*
(*but were afraid to ask)
Simone Tiraboschi, Principal Software Engineer
January 2019
2. 2
AGENDA
● Brief getting started
○ Packages
○ Services
● Hosted engine deployment:
○ Vintage flow
○ New node 0 flow
○ Cleanup
○ UI - cockpit
● HE Maintenance types: local/global
● Technical details and internals
● Logs and troubleshooting:
○ Backup and restore
○ Major issues
● Q&A ???
3. Getting started
3
● Hosted engine is the simplest way to ensure HA capabilities for oVirt engine/RHV
manager
● The engine is installed on a VM (it saves hosts for virt purposes)
● The engine VM runs on hosts managed by the engine
● HE involved hosts can also run regular VMs
● Dedicated services on HE hosts take care of keeping the engine up
4. Related packages
● ovirt-engine-appliance/rhvm-appliance: it provides an up to date appliance with engine rpms
● ovirt-hosted-engine-setup: provides the setup and CLI utility
● ovirt-hosted-engine-ha: provides ha services:
○ ovirt-ha-agent:
■ Monitors local host state, engine VM status
■ Takes action if needed to ensure high availability
○ ovirt-ha-broker:
■ Liaison between ovirt-ha-agent and:
● Shared storage (metadata)
● Local host status (monitoring)
■ Serializes requests
■ Separate, testable entity distinct from ovirt-ha-agent
5. ovirt-ha-broker
● Used by ovirt-ha-agent to read from / write to the storage
● Has a set of monitors for host status:
○ Ping (gateway)
○ CPU load
○ Memory use
○ Management network bridge status
○ Engine VM status (at virt level)
○ Engine status (via an HTTP request to a health page)
● Listening socket:
/var/run/ovirt-hosted-engine-ha/broker.socket
6. ovirt-ha-agent
● Ensures ovirt-engine VM high availability
● Uses configuration file written by setup (/etc/ovirt-hosted-engine/hosted-engine.conf):
○ Host id, storage config, gateway address to monitor, …
● If Engine VM is not running, it's started
● If Engine is non-responsive, VM is restarted
● VM status is read from the vdsm getVmStats verb
● Engine status via engine liveness page:
http://<engine-fqdn>/OvirtEngineWeb/HealthStatus
7. ovirt-ha-agent - host score
● Single number (scalar) representing a host's suitability for running the engine VM
● Range is 0 (unsuitable) to 3400 (all is well)
● May change
● Calculated based on host status: each monitor (ping, cpu load,
gateway status, ...) has a weight and contributes to the score
● If the VM is starting, it starts on the host with the highest score. If startup fails, the host's score is
temporarily reduced, allowing other hosts to try starting the VM
● If the VM is running and a different host has a much higher score, the VM is migrated to the better host
● The current migration/restart threshold is 800 points. E.g. a gateway failure will trigger a migration, CPU
load won't...
8. ovirt-ha-agent - host score penalties
https://github.com/oVirt/ovirt-hosted-engine-ha/blob/master/ovirt_hosted_engine_ha/agent/agent.conf
[score]
# NOTE: These values must be the same for all hosts in the HA cluster!
base-score=3400
gateway-score-penalty=1600
not-uptodate-config-penalty=1000
mgmt-bridge-score-penalty=600
free-memory-score-penalty=400
cpu-load-score-penalty=1000
engine-retry-score-penalty=50
cpu-load-penalty-min=0.4
cpu-load-penalty-max=0.9
9. ovirt-ha-agent - host score penalties
https://github.com/oVirt/ovirt-hosted-engine-ha/blob/master/ovirt_hosted_engine_ha/agent/states.py
Penalties are proportionally applied:
# Penalty is normalized to [0, max penalty] and is linear based on
# (magnitude of value within penalty range) / (size of range)
penalty = int(
    (load_average - score_cfg['cpu-load-penalty-min']) /
    (
        score_cfg['cpu-load-penalty-max'] -
        score_cfg['cpu-load-penalty-min']
    ) *
    score_cfg['cpu-load-score-penalty']
)
penalty = max(0, min(score_cfg['cpu-load-score-penalty'], penalty))
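The normalization above can be restated as a small self-contained function. This is a sketch using the agent.conf defaults quoted on the previous slide, not the exact upstream code:

```python
# Sketch of the CPU-load penalty normalization: linear between
# cpu-load-penalty-min and cpu-load-penalty-max, clamped to
# [0, cpu-load-score-penalty]. Values mirror the agent.conf defaults.

SCORE_CFG = {
    "base-score": 3400,
    "cpu-load-score-penalty": 1000,
    "cpu-load-penalty-min": 0.4,
    "cpu-load-penalty-max": 0.9,
}

def cpu_load_penalty(load_average, cfg=SCORE_CFG):
    """Penalty is proportional to where the load falls in the penalty range."""
    span = cfg["cpu-load-penalty-max"] - cfg["cpu-load-penalty-min"]
    penalty = int(
        (load_average - cfg["cpu-load-penalty-min"]) / span
        * cfg["cpu-load-score-penalty"]
    )
    return max(0, min(cfg["cpu-load-score-penalty"], penalty))

# Below the minimum load there is no penalty, above the maximum the full
# penalty applies, and in between it scales linearly.
print(cpu_load_penalty(0.3))   # 0
print(cpu_load_penalty(0.65))  # 500
print(cpu_load_penalty(1.2))   # 1000
```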
10. ovirt-ha-agent - migrate/restart
https://github.com/oVirt/ovirt-hosted-engine-ha/blob/master/ovirt_hosted_engine_ha/agent/states.py
If a different host has a significantly better score, the engine VM is shut down on the first host
and restarted on the second one (cold migration!)
if (new_data.best_score_host and
        new_data.best_score_host["host-id"] != new_data.host_id and
        new_data.best_score_host["score"] >= self.score(logger) +
        self.MIGRATION_THRESHOLD_SCORE):
    logger.error("Host %s (id %d) score is significantly better"
                 " than local score, shutting down VM on this host",
                 new_data.best_score_host['hostname'],
                 new_data.best_score_host["host-id"])
    return EngineStop(new_data)
Local maintenance triggers a live migration instead
11. ovirt-ha-agent
● Only live hosts are considered for startup/migration… (let’s see maintenance mode...)
● If a host hasn't updated its metadata in a short while, it is considered dead
● Startup algorithm is eager/optimistic: if two hosts have same score, both will try to start VM
● Sanlock (volume lease) will allow only one to succeed
● Race can also happen due to hosts not seeing metadata updates at the same time
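The eager startup race above can be illustrated with a toy simulation: two hosts with the same score both try to start the engine VM, and a shared lease lets exactly one of them win. Here a plain threading.Lock stands in for the sanlock volume lease; this is an illustration, not real HA code.

```python
# Two "hosts" race to start the engine VM; the shared lease (a Lock
# standing in for sanlock's volume lease) allows only one to succeed.
import threading

vm_lease = threading.Lock()   # stands in for the sanlock volume lease
results = {}

def try_start_engine_vm(host_name):
    # acquire(blocking=False) mimics sanlock refusing a second holder
    if vm_lease.acquire(blocking=False):
        results[host_name] = "started engine VM"
    else:
        results[host_name] = "lease denied, backing off"

hosts = [threading.Thread(target=try_start_engine_vm, args=(h,))
         for h in ("host1", "host2")]
for t in hosts:
    t.start()
for t in hosts:
    t.join()

print(results)
winners = [h for h, r in results.items() if r == "started engine VM"]
assert len(winners) == 1  # exactly one host can hold the lease
```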
13. Vintage flow: <=4.1, deprecated in 4.2, removed in 4.3
(flow diagram between hosted-engine-setup, VDSM, the Engine and ovirt-ha-agent)
● hosted-engine-setup: configure and start VDSM; set up the host network (create the management bridge); create the SD; create the disks; copy the appliance disk over the engine VM one; start the engine VM
● Engine: configure and start; add the host
● hosted-engine-setup: shut down the engine VM
● ovirt-ha-agent: configure and start; ovirt-ha-agent starts the engine VM from an initial conf file
● At the end: the oVirt engine is up and running on its VM, the first host is there, the engine VM and its SD are not visible. Successfully terminated.
14. Vintage flow, part 2 - uncontrolled and up to the user
(flow diagram between the User, the engine, VDSM and ovirt-ha-agent)
● ovirt-ha-agent: query the HA score and maintenance mode; query running VMs and storage domain statuses
● User: create the first data SD
● Engine (via VDSM): create the SD; set the new SD as the master SD; autoimport the engine SD; autoimport the engine VM; generate the OVF_STORE disks on the hosted-engine SD with the engine VM configuration
● Only now the engine VM and its SD are visible in the engine
● Only now ovirt-ha-agent is ready to restart the engine VM as configured by the engine
● The user can edit the engine VM configuration; an updated configuration will be written by the engine in the OVF_STORE volume via VDSM
● The hosted-engine SD is not the master storage domain!!!
15. Current ansible based flow (node 0): goals
● Use standard engine flows, avoid re-implementing logic
● Replace complicated hosted engine import code in engine
● Keep all existing features
● Better input validation - get rid of the execute, fail, retry cycle
● Phase out OTOPI in favour of Ansible
● Keep backward compatibility with our CLI and Cockpit flows
16. Current ansible based flow (node 0)
(flow diagram between hosted-engine-setup, libvirt, VDSM, the engine on the bootstrap VM, the engine from the shared storage and ovirt-ha-agent)
● hosted-engine-setup: configure and start libvirt; configure a NATted network; start a local VM (over the NATted network) from the engine appliance disk
● Engine on bootstrap VM: configure and start; add the host; add the hosted-engine SD (VDSM creates the SD); add the disks (VDSM creates the disks); add the hosted-engine VM; update the OVF_STORE disks with the engine VM definition
● hosted-engine-setup: shut down the bootstrap VM; copy the bootstrap engine VM disk from local storage to the shared storage
● ovirt-ha-agent: configure and start; start the engine VM
● ovirt-ha-agent starts the engine VM from its definition as recorded in the OVF_STORE by the engine
● The engine VM and its SD are already visible in the engine; the hosted-engine SD is the master SD
● Everything is correctly in the engine DB since the beginning, no autoimport!
17. Current ansible based flow (node 0): benefits 1
● Reverse the vintage setup flow - start ovirt-engine first (instead of vdsm):
○ ovirt-engine already knows how to perform all the tasks we need, fewer bugs!
○ The bootstrap VM will use libvirt’s NATted network and local disk,
deployment is simpler on networking side
○ /etc/hosts is configured to resolve all the necessary names properly
● Only collect information needed for the bootstrap ovirt-engine to start:
1. Engine FQDN and credentials
2. The first host’s (self) hostname
18. Current ansible based flow (node 0): benefits 2
● Only once we have a locally running engine:
1. Ask the user for storage connection data
2. Immediate validation possible through running ovirt-engine
All flows use the standard engine and vdsm logic!
● No SPM ID issues
● No vdsm issues with “undocumented” APIs
● All engine DB records reflect the latest and greatest storage features
All VM features match standard VMs (memory hotplug, migration profiles...)
No autoimport code, no value guessing - all defined using standard flows
19. Summary
Vintage (deprecated) flow
1. CLI and cockpit based
2. Fully custom setup (OTOPI)
3. Time to finish: ~30 min
4. No upfront storage validation
5. Custom code and flows
6. HE specific VM configuration
Node 0 flow
1. CLI and cockpit based
2. Setup based on Ansible*
3. Time to finish: ~30 min
4. Storage connection validated
5. Standard code and flows
6. Standard HE VM configuration
* OTOPI wrapper left for backwards
compatibility with answer files and so on
20. Cleanup
To clean up a single HE host after a failed or successful deployment, the user can
run:
[root@c76he20190115h1 ~]# ovirt-hosted-engine-cleanup
This will de-configure the host to run ovirt-hosted-engine-setup from
scratch.
Caution, this operation should be used with care.
Are you sure you want to proceed? [y/n]
It will not touch the shared storage at all; it's up to the user to clean that if needed
22. On a clean host, the wizard in the
hosted-engine tab will let you choose
between Hosted-engine and
Hyperconverged setup
23. We have to input a few values to
configure the engine VM; the most
relevant are:
● Engine VM FQDN
● Engine VM networking conf
● Engine VM root password
● MEM
● # of CPUs
24. If you choose to configure the
engine VM with DHCP, please
ensure you have a DHCP
reservation for the engine VM, so
that its hostname resolves to the
address obtained from DHCP, and
specify its MAC address here
29. The install steps will be tracked
here.
Any issue will be shown here as a
failed task
30. On successful executions, you will
reach this intermediate step.
Now we have a local VM with a
running oVirt engine, and we are
going to use it to configure your
system
31. Now you should create your first
storage domain.
The SD parameters will be
validated by the engine.
The setup process will loop here
until we get a valid storage
domain.
32. Configuration summary...
...if OK, press here:
● The SD will be created
● A VM and all the HE disks will be created on
that SD
● The bootstrap local VM will be shut down
● Its disk will be moved to the shared storage,
over the disk of the target VM created by the engine
● The engine VM will be restarted from the
shared storage
No more need to manually add another SD from
engine webadmin, no more need to wait for the
auto import process
40. Maintenance modes
● For HE we have 2 maintenance modes:
○ Global maintenance mode
■ It’s for the engine VM: the VM will not be migrated/restarted by the HA agent
■ It applies to all the HE hosts at the same time (it’s a flag on the shared storage)
■ It can be set from CLI or from the engine
○ Local maintenance mode
■ It’s for host related activities
■ It applies to a single host
■ It’s locally saved on host FS (/var/lib/ovirt-hosted-engine-ha/ha.conf)
■ Setting host maintenance mode from the engine will also imply hosted-engine local
maintenance mode for the same host, the opposite is not true
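As a sketch, the way the two flags gate HA actions can be summarized in a few lines. The function name and signature here are illustrative, not the actual agent API: global maintenance (a flag on the shared storage, seen by all hosts) stops any action on the engine VM, while local maintenance only takes the single host out of consideration.

```python
# Illustrative decision logic for the two HE maintenance modes.
def agent_may_act(global_maintenance, local_maintenance_on_this_host):
    """May ovirt-ha-agent start/restart/migrate the engine VM on this host?"""
    if global_maintenance:
        # Applies to all HE hosts at once: nobody touches the engine VM.
        return False
    if local_maintenance_on_this_host:
        # Only this host is out; other hosts may still act.
        return False
    return True

assert agent_may_act(False, False) is True
assert agent_may_act(True, False) is False
assert agent_may_act(False, True) is False
```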
44. On the shared storage
● he_metadata
○ It’s a kind of whiteboard used by the hosted-engine hosts to communicate.
○ Each host writes its metadata (status, local maintenance, score, update timestamp...) in its
own sector (based on host_id, so things break if host_ids get messed up...)
○ Each host can only write in its own sector, but it will also read the metadata from the other hosts
● he_sanlock
○ Used with sanlock to protect engine VM disk and he_metadata global sector, based on host_id
● he_virtio_disk
○ It contains engine VM disk
● HostedEngineConfigurationImage
○ It contains a master copy of the hosted-engine configuration, used as a template when adding a new HE
host from the engine
● OVF_STORE (two copies)
○ It contains the VM definitions (OVF + libvirt XML for all the VMs on that SD)
○ Periodically (every 1h or on updates) rewritten by the engine
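The per-host sector layout of he_metadata can be illustrated with a tiny sketch. The 4 KiB chunk size is an assumption for the example, not necessarily the real on-disk value; the point is that the offset is derived purely from host_id, so a duplicated host_id makes two hosts write the same region:

```python
# Illustrative host_id-based layout of the he_metadata volume:
# each host owns one fixed-size chunk addressed by its host_id.
CHUNK_SIZE = 4096  # assumed block size per host, for illustration only

def metadata_offset(host_id):
    """Byte offset of a host's private metadata sector."""
    if host_id < 1:
        raise ValueError("host_id starts at 1")
    return host_id * CHUNK_SIZE

print(metadata_offset(1))  # 4096
print(metadata_offset(2))  # 8192
# Distinct host_ids map to distinct sectors; two hosts mistakenly
# sharing the same host_id would clobber each other's sector.
assert metadata_offset(1) != metadata_offset(2)
```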
45. Temporary files
# Editing the hosted engine VM is only possible via the manager UI/API
# This file was generated at Sun Jan 20 01:37:27 2019
cpuType=Haswell-noTSX
emulatedMachine=pc-i440fx-rhel7.6.0
vmId=ed1535c9-903b-415c-bd2f-f3346ca4f256
smp=4
memSize=4096
maxVCpus=64
spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,s
smartcard,susbredir
xmlBase64=PD94bWwgdmVyc2lvbj0nMS4wJyBlbmNvZGluZz0nVVRGLTgnPz4KPGRvbWFp
biB4bWxuczpvdmlydC10dW5lPSJodHRwOi8vb3ZpcnQub3JnL3ZtL3R1bmUvMS4wIiB4bW
xuczpvdmlydC12bT0iaHR0cDovL292aXJ0Lm9yZy92bS8xLjAiIHR5cGU9Imt2bSI+PG5h
bWU+SG9zdGVkRW5naW5lPC9uYW1lPjx1dWlkPmVkMTUzNWM5LTkwM2ItNDE1Yy1iZDJmLW
YzMzQ2Y2E0ZjI1NjwvdXVpZD48bWVtb3J5PjQxOTQzMDQ8L21lbW9yeT48Y3VycmVudE1l
bW9yeT40MTk0MzA0PC9jdXJyZW50TWVtb3J5Pjxpb3RocmVhZHM+MTwvaW90aHJlYWRzPj
xtYXhNZW1vcnkgc2xvdHM9IjE2Ij4xNjc3NzIxNjwvbWF4TWVtb3J5Pjx2Y3B1IGN1cnJl
bnQ9IjQiPjY0PC92Y3B1PjxzeXNpbmZvIHR5cGU9InNtYmlvcyI+PHN5c3RlbT48ZW50cn
kgbmFtZT0ibWFudWZhY3R1cmVyIj5vVmlydDwvZW50cnk+PGVudHJ5IG5hbWU9InByb2R1
Y3QiPk9TLU5BTUU6PC9lbnRyeT48ZW50cnkgbmFtZT0idmVyc2lvbiI+T1MtVkVSU0lPTj
o8L2VudHJ5PjxlbnRyeS
…
/var/run/ovirt-hosted-engine-ha/vm.conf:
● Periodically regenerated by parsing the OVF_STORE disks
● Used by ovirt-ha-agent to start the engine VM via vdsm
● On tmpfs, not persisted!
● If the libvirt XML is there (in base64) - engine >= 4.2 - vdsm will send it directly to libvirt,
ignoring everything else
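The xmlBase64 key shown above carries the libvirt domain XML, base64-encoded, inside vm.conf. A quick way to inspect it is to decode that value; since the slide's sample is truncated, the snippet below uses a tiny stand-in payload:

```python
# Decode an xmlBase64-style value to inspect the embedded libvirt XML.
import base64

# Stand-in for the (truncated) value on the slide.
sample = base64.b64encode(
    b"<domain type='kvm'><name>HostedEngine</name></domain>"
)
decoded = base64.b64decode(sample).decode()
print(decoded)
assert decoded.startswith("<domain")
```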
46. Inside engine’s DB
[root@hosted-engine-01 tmp]# sudo -u postgres scl enable rh-postgresql10 -- psql -d engine -c "select vds_name, vds_spm_id, ha_score,
ha_configured, ha_active, ha_global_maintenance, ha_local_maintenance, is_hosted_engine_host from vds"
   vds_name   | vds_spm_id | ha_score | ha_configured | ha_active | ha_global_maintenance | ha_local_maintenance | is_hosted_engine_host
--------------+------------+----------+---------------+-----------+-----------------------+----------------------+-----------------------
 host_mixed_1 |          1 |     3400 | t             | t         | f                     | f                     | f
 host_mixed_2 |          2 |     3400 | t             | t         | f                     | f                     | t
 host_mixed_3 |          3 |     3400 | t             | t         | f                     | f                     | f
(3 rows)
Notes on the columns:
● vds_spm_id should ALWAYS match the host_id used in /etc/ovirt-hosted-engine/hosted-engine.conf on each host!
● ha_score: the HA score as reported by the host
● ha_configured: whether hosted-engine is configured on this specific host
● ha_active: whether hosted-engine is active on this specific host
● ha_global_maintenance: whether this host reported to be in HE global maintenance
● is_hosted_engine_host: computed by the engine according to the VM info as reported by the hosts
● vds is a view based on vds_static, vds_dynamic and vds_statistics
● All of them refer to the last time the engine successfully queried VDSM! The engine has no direct access to the hosted-engine metadata!!! This info could be outdated!
48. Setup Logs
All the logs are available under /var/log/ovirt-hosted-engine-setup/ on the host:
ovirt-hosted-engine-setup-ansible-get_network_interfaces-20183617355-er2ny7.log
ovirt-hosted-engine-setup-ansible-initial_clean-2018361784-i75e3e.log
ovirt-hosted-engine-setup-ansible-bootstrap_local_vm-20183617821-lhrq77.log
ovirt-hosted-engine-setup-ansible-create_storage_domain-201836172019-wh5qok.log
ovirt-hosted-engine-setup-ansible-create_target_vm-201836172117-o0rrxl.log
ovirt-hosted-engine-setup-ansible-final_clean-201836173040-ugmzwx.log
We have at least 6 playbooks until the end so at least 6 log files (1 more for FC device discovery, 2 more for
iSCSI discovery and login)
The setup also tries to extract the engine logs from the engine VM disks; this is not always possible
49. Other relevant logs on the host
Other relevant logs on the host are:
/var/log/messages
/var/log/vdsm/vdsm.log
/var/log/vdsm/supervdsm.log
/var/log/libvirt/qemu/HostedEngineLocal.log
/var/log/libvirt/qemu/HostedEngine.log
50. Other relevant logs on the engine VM
Other relevant logs on the engine VM are:
/var/log/messages
/var/log/ovirt-engine/engine.log
/var/log/ovirt-engine/server.log
/var/log/ovirt-engine/host-deploy/ovirt-host-deploy-20180406171311-c74he20180302h1.localdomain-301aca6d.log
/var/log/ovirt-engine/host-deploy/ovirt-host-deploy-ansible-20180406171312-c74he20180302h1.localdomain-301aca6d.
log
The target VM started from the shared storage should be accessible at the FQDN you set, from any host in
the network.
The bootstrap local VM runs over a NATted network, so it's accessible only from the host where it's running;
/etc/hosts should already contain a record for it pointing to the right address on the NATted subnet.
If not accessible over the network other means to reach it are:
hosted-engine --console
remote-viewer vnc://<host_address>:5900 (hosted-engine --add-console-password to set a
temporary VNC password)
51. HA services logs
Relevant logs:
/var/log/ovirt-hosted-engine-ha/agent.log
/var/log/ovirt-hosted-engine-ha/broker.log
The logs are usually at INFO level; they can be set to DEBUG level by editing
/etc/ovirt-hosted-engine-ha/agent-log.conf and broker-log.conf
[loggers]
keys=root
[handlers]
keys=syslog,logfile
[formatters]
keys=long,sysform
[logger_root]
level=INFO
handlers=syslog,logfile
propagate=0
[handler_syslog]
(set DEBUG in the level line of [logger_root] above and restart the service)
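The edit described above can also be scripted. A minimal sketch: the file content below is a stand-in for /etc/ovirt-hosted-engine-ha/agent-log.conf, and restarting ovirt-ha-agent afterwards is left to the operator:

```python
# Flip the root logger from INFO to DEBUG in an agent-log.conf-style file.
# The string below stands in for the real file content.
conf = """[logger_root]
level=INFO
handlers=syslog,logfile
propagate=0
"""

# Only the first (root logger) level line is changed.
debug_conf = conf.replace("level=INFO", "level=DEBUG", 1)
print(debug_conf)
assert "level=DEBUG" in debug_conf
```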
52. Backup and restore
● Since hosted-engine relies on a volume lease (VM leases weren't available in oVirt when we started
hosted-engine), we cannot rely on disk snapshots for the engine VM
● For the same reason we cannot live storage migrate the engine VM from its storage domain to a
different one
● The best approach to backup hosted-engine is periodically running engine-backup inside the VM
● With 4.2.7 we introduced the capability to restore a backup on the fly deploying
a new hosted-engine VM:
hosted-engine --deploy --restore-from-file=file
Restore an engine backup file during the deployment
● The same tool could be used to migrate the engine VM from one storage domain to a new one or
from bare metal to hosted-engine via backup and restore
● CAREFULLY follow the documentation, since ending up with two active engine VMs acting at the same
time over the same VMs could be really destructive
53. Restore: how it works
(same flow diagram as the node 0 deployment)
● hosted-engine-setup: configure and start libvirt; configure a NATted network; start a local VM (over the NATted network) from the engine appliance disk
● The provided backup file is injected here and restored via ansible before starting the engine
● Engine on bootstrap VM: configure and start; add the host; add the hosted-engine SD (VDSM creates the SD); add the disks (VDSM creates the disks); add the hosted-engine VM; update the OVF_STORE disks with the engine VM definition
● hosted-engine-setup: shut down the bootstrap VM; copy the bootstrap engine VM disk from local storage to the shared storage
● ovirt-ha-agent: configure and start; start the engine VM
TRICKY PART:
The backup we just restored on the engine contains
other hosts, other storage domains, other networks
and so on, maybe also with outdated info.
If, for instance, just one of the storage domains
recorded in the DB cannot be connected, the host
will be declared non operational by the engine,
and we cannot complete the deployment on a non
operational host.
In this case the best option is to temporarily restore
on a new datacenter (the setup will create it on the
fly) to be sure that the deployment can successfully
complete. Once the restored engine is up and
running, the user can connect to the engine, fix
what's wrong, and then repeat the procedure to add
the host back in the desired datacenter and cluster
54. Major issues: 1. SPM host id mix-up
host_id is locally stored on each host in /etc/ovirt-hosted-engine/hosted-engine.conf
It’s also stored for each host in the engine DB as vds_spm_id
If, due to human error, reactivation of old decommissioned hosts, backups… two hosts try to
use the same SPM host id, strange behaviours can be observed:
● Inconsistent HA scores reported by the hosts
● Sanlock issues
● ...
SOLUTION: manually align the host_id locally saved on each host to what's recorded in the
engine DB (the master copy!)
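The alignment check suggested above can be sketched as a small script: every host's locally stored host_id should match vds_spm_id in the engine DB, and no two hosts may share an id. The data below is illustrative; in practice you would gather it from each host's /etc/ovirt-hosted-engine/hosted-engine.conf and from the engine DB query shown earlier:

```python
# Detect host_id mismatches against the engine DB and duplicated ids.
local_ids = {"host1": 1, "host2": 2, "host3": 2}   # host3 reuses id 2: bad
db_ids    = {"host1": 1, "host2": 2, "host3": 3}   # engine DB is the master copy

def find_mismatches(local, db):
    # Hosts whose local host_id differs from the DB's vds_spm_id.
    mismatched = {h for h in local if local[h] != db.get(h)}
    # Hosts sharing the same local host_id (sector collision risk).
    seen, duplicated = {}, set()
    for host, hid in local.items():
        if hid in seen:
            duplicated.update({host, seen[hid]})
        seen[hid] = host
    return mismatched, duplicated

mismatched, duplicated = find_mismatches(local_ids, db_ids)
print(mismatched)   # host3 disagrees with the DB
print(duplicated)   # host2 and host3 share an id
```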
55. Major issues: 2. corrupted metadata volume
Due to storage issues, power outages and so on, the metadata volume can become corrupted; if
so, it can be cleaned by running (on just one of the hosted-engine hosts):
[root@c76he20190115h1 ~]# hosted-engine --clean-metadata --help
Usage: /sbin/hosted-engine --clean_metadata [--force-cleanup]
[--host-id=<id>]
Remove host's metadata from the global status database.
Available only in properly deployed cluster with properly stopped
agent.
--force-cleanup This option overrides the safety checks. Use at your
own risk DANGEROUS.
--host-id=<id> Specify an explicit host id to clean
If --host-id=<id> is specified, only the sector of that specific host will be cleaned.
If --force-cleanup is specified, any safety check about other active hosts will be ignored.
56. Major issues: 3. corrupted lockspace volume
Due to storage issues, power outages and so on, the lockspace volume can become corrupted;
you can notice it from sanlock related errors in broker.log and sanlock.log. If so, it can be
cleaned with:
# on each HE host
systemctl stop ovirt-ha-agent ovirt-ha-broker
sanlock client shutdown -f 1 # careful, it could trigger the watchdog and reboot
# on a single host
hosted-engine --reinitialize-lockspace
# on each HE host
systemctl start ovirt-ha-agent ovirt-ha-broker
57. Major issues: 4. Engine-setup refuses to execute
● Engine-setup is going to shut down the engine VM for setup tasks while, if not in global maintenance
mode, ovirt-ha-agent is going to monitor engine health and potentially restart the engine VM to get a
working engine
● Shutting down the engine VM in the middle of a yum, engine or DB upgrade could lead to bad results
so engine-setup is trying to detect global maintenance status as recorded in the engine DB
● The point is that the DB could contain outdated info that the engine is not able to refresh for various
reasons, so we can have a false positive where engine-setup refuses to upgrade, requiring global
maintenance mode while the env is already in it
SOLUTION: if, and only if, the user is absolutely sure that the engine is really in global maintenance mode,
they can execute engine-setup with
--otopi-environment=OVESETUP_CONFIG/continueSetupOnHEVM=bool:True to completely
skip the check about global maintenance mode as recorded in the DB (potentially dangerous!)
58. ● The user can edit the engine VM from the engine itself.
● The new configuration will be written to the OVF_STORE and consumed on the next run
(the OVF_STORE update will be synchronous; expect a few seconds' delay to complete).
● The engine should validate the configuration preventing upfront major mistakes
● If for any reason the VM cannot start with the new configuration, the user can still:
# copy /var/run/ovirt-hosted-engine-ha/vm.conf somewhere else
# edit it as needed (remember that the libvirt XML, if there, will win over anything
else, so fix it or remove it)
# start the engine VM with the customized configuration:
hosted-engine --vm-start --vm-conf=/root/myvm.conf
# fix what's wrong from the engine, and try a new boot from the OVF_STORE definition
Major issues: 5. Engine VM refuses to start
60. Q&A: questions from session 1
● Ansible deployment failed - where to look for errors?
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-ansible*
It’s always in verbose mode
● Hosted Engine VM is unmanaged - how to recover?
It could happen in 4.1 if for any reason the autoimport code failed. It can't happen anymore by design with the new flow.
● HE VM is corrupted, what is the best way to recover?
The best way to recover is using the restore flow starting from the last backup. If no backup is available, try a fresh deployment
manually importing all the existing storage domains at the end and manually adding back all the hosts.
● How can we modify HE VM from vm.conf and persist it?
vm.conf is periodically regenerated starting from the OVF_STORE content; if the engine is available, the best option is to edit the VM
configuration from the engine. If the engine is not available, we can make a copy of vm.conf, fix what is needed and then try to start the
engine VM with a custom vm.conf via hosted-engine --vm-start --vm-conf=/root/myvm.conf
No change is going to be persisted, so the user has to manually fix the engine VM definition again from the engine once it is up and
running.
61. Q&A: questions from session 2
● Cannot add host to HE cluster (deploy as HE host)?
Check engine.log and host-deploy logs on the engine VM
● Host vm-status is different from the rest of the cluster?
Probably one of the hosts is failing to access the storage to write its score or to read the other hosts' metadata from the metadata volume.
Please check the update timestamps and broker.log for storage related errors.
● HE VM would not start - what to do?
Check agent.log, broker.log, libvirt and vdsm logs for errors. Fix as required (for instance regenerate lockspace or metadata volume or
provide a custom vm.conf).
● When recovering from backup - which appliance to take? For instance we are on 4.2 now, but original deployment was
on 4.1
Normally the best option is the latest .z version of the same major version used at backup time, since engine-backup is not supposed to be an
upgrade tool.
For instance, if the backup was taken with 4.1, it will be safer to apply it on a 4.1 based appliance.
For migrations it will be safer to upgrade before the migration if needed.