The document describes how to move a data center to a new location within 35 days. It outlines the key steps taken each week, which included retiring and virtualizing servers, designing a new backup architecture, receiving new cabinets and cooling units, moving servers between locations, and final testing at the new site. The move was successful, with over 110 servers transferred and backups established for 80 servers within the 35-day timeline. Tips are provided, such as planning extensively, documenting thoroughly, communicating clearly, having backup plans, and testing thoroughly.
From Hello World to Real World - Container Days Boston 2016 - Shannon Williams
From Hello World to Real World: Creating a Production-Grade Container Environment - Bill Maxwell & Shannon Williams
Containers are lightweight, portable and easy to orchestrate, so the enthusiasm for running applications in them is understandable. Once you get past the "hello world" moment of deploying a single container app, though, you quickly realize that running complex apps using containers in production takes a little more work.
Bill and Shannon will walk through building a production-grade container environment from the ground up: from the first deployment of a container, through considerations for building a registry, to introducing container monitoring and logging and plugging containers into your existing CI/CD. They'll look at the transition from scripting and automation tools to cluster management and orchestration, and how service discovery and application templates quickly become key elements to deploying complex applications.
The journey will continue on to container networking, load balancing and config injection, as well as how to manage secrets, define access control policies, and provide visibility and control for your new container service. Along the way, Bill and Shannon will be demonstrating different tools, talking about some of the issues you'll run into, and discussing lessons the community has learned about production-grade container environments so far.
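As an illustrative aside, the service discovery the abstract mentions can be sketched as a tiny in-memory registry. This is a hypothetical, simplified model (real container environments use DNS-based discovery or a distributed key-value store); the class and endpoint values here are invented for the example.

```python
import random

class ServiceRegistry:
    """Toy in-memory service registry: containers register their
    endpoints under a service name, and clients look one up."""

    def __init__(self):
        self._services = {}  # service name -> list of "host:port" endpoints

    def register(self, name, endpoint):
        self._services.setdefault(name, []).append(endpoint)

    def deregister(self, name, endpoint):
        self._services.get(name, []).remove(endpoint)

    def lookup(self, name):
        """Return one registered endpoint at random (crude load balancing)."""
        endpoints = self._services.get(name)
        if not endpoints:
            raise LookupError(f"no instances registered for {name!r}")
        return random.choice(endpoints)

registry = ServiceRegistry()
registry.register("web", "10.0.0.5:8080")
registry.register("web", "10.0.0.6:8080")
print(registry.lookup("web"))  # one of the two registered endpoints
```

A production registry would add health checks and TTL-based expiry on top of this basic register/lookup cycle.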
Starting a Business Questions, Answers, Polls & Debates - erna8nielsen65
This document contains a collection of questions, answers, polls and debates related to starting a business. There are over 50 discussions on various topics such as what type of business someone would like to start, how to start specific types of businesses like a dollar store or pest control business, how to find funding to start a business, and questions about legal requirements for different businesses. The discussions cover a wide range of industries and questions that entrepreneurs may encounter when planning to start their own company.
The document discusses hybrid cloud and how Extreme Networks uses hybrid cloud solutions. It provides examples of how Extreme Networks uses VMware vCloud Air to dynamically spin up virtual machines for remote training classes to reduce costs. It also evaluates and scores various cloud providers like AWS, VMware vCloud Air, and Azure based on factors like the company, legal, openness, usability, and development. It stresses the importance of understanding infrastructure needs before moving applications to cloud.
This document provides an overview of an ICAO workshop on cabin crew competency-based training. The workshop aims to introduce ICAO standards and guidance material related to cabin crew training. It will cover topics such as the ICAO competency framework, transitioning from traditional to competency-based training, and practical exercises for developing competency-based training scenarios. The workshop schedule provides the timing and topics to be covered each day.
Cisco uses ThousandEyes to monitor cloud services and gain visibility into network stability and troubleshooting. ThousandEyes helps Cisco reduce the mean time to troubleshoot issues by 43% and the mean time to restore services by 8%. ThousandEyes has successfully helped Cisco resolve issues with WebEx, Salesforce, firewalls in India, and support apps in India. Cisco's goal is to run ThousandEyes on Cisco routers to improve monitoring.
David Epperson interned at Energizer Holdings for 12 weeks where he gained hands-on experience in server administration, networking, end-user support, and information security. Over the course of the internship, he learned to build virtual machines, configure Hyper-V clusters, set up new phone systems, track assets, help desk support, and more. The internship provided valuable experience beyond his coursework and better prepared him for an IT career.
The team lost a Hadoop competition hosted by Etu due to poor performance of their VirtualBox environment. They deployed Hadoop with Namenode HA and Kerberos using Hadooppet, but VirtualBox's hyperthreading caused sluggishness. Their architecture of large VMs with many vCPUs did not match VirtualBox's ability to handle multi-core workloads efficiently. They have learned not to assume virtual environments will perform like physical servers, and to test different deployment options rather than focusing only on their deployment tool.
The document discusses a company that faced scalability issues with their infrastructure as their product Inoreader grew in popularity. To address these issues, they migrated their servers to a fully virtualized environment using OpenNebula for virtualization and StorPool for distributed storage. This provided significant performance gains and increased capacity, allowing them to run the same workloads with fewer physical servers and benefit from high availability and redundancy. The migration process involved setting up StorPool nodes, virtualizing servers incrementally, and migrating VMs and data to the new infrastructure over multiple iterations.
Neo4j GraphTalks Zurich - Taming the Complexity of Network & IT Ops - Neo4j
The document discusses using graphs and Neo4j to improve network and IT operations. It describes how networks can be modeled as graphs and how this allows for more effective proactive dependency analysis and reactive root cause analysis when issues arise. Examples of specific scenarios and use cases are provided, as well as a discussion of the types of organizations that are using Neo4j for these purposes.
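The dependency analysis described above can be illustrated with a plain-Python sketch: model the network as a graph and walk it breadth-first to find everything downstream of a failed component. The component names and edges here are hypothetical, and a real deployment would run an equivalent traversal as a Cypher query in Neo4j.

```python
from collections import deque

# Hypothetical dependency graph: edges point from a component to the
# components that depend on it.
DEPENDENTS = {
    "switch-01": ["host-a", "host-b"],
    "host-a": ["app-db"],
    "host-b": ["app-web"],
    "app-db": ["app-web"],
    "app-web": [],
}

def impacted_by(component):
    """Breadth-first walk to find everything downstream of a failure."""
    seen, queue = set(), deque([component])
    while queue:
        node = queue.popleft()
        for dep in DEPENDENTS.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return sorted(seen)

print(impacted_by("switch-01"))  # ['app-db', 'app-web', 'host-a', 'host-b']
```

Running the same walk in reverse (from a failing application toward its upstream components) gives the reactive root-cause direction the summary mentions.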
This document summarizes the network infrastructure deployed for the 2008 World Youth Day event in Sydney, Australia. Over 400,000 visitors and 3,000 media personnel were expected. The network had to support three major venues and work on the first try with only 3 months of preparation. An International Media Centre was set up as a 24/7 operation for 15 days, requiring 800 network ports and fiber optic cabling. Network pods were used at each site to improve resilience. The event was a success with few user complaints and reliable internet access throughout.
Automated Performance Testing for Desktop Applications by Ciprian Balea - 3Pillar Global
PowerPoint presentation by 3Pillar's Ciprian Balea, QA Lead, which was delivered at the Romanian Testing Conference (RTC) 2014 in Cluj-Napoca, Romania on May 15, 2014.
The team plans to upgrade their existing data center by relocating it to a new building and expanding their server capacity. They will lease a new colocation space and purchase 20 new Dell servers and a Cisco switch. The project will be completed in 8 phases, including acquiring the new location, purchasing hardware, setup, beta testing, release, and closeout. A communications plan, budget, and risk management plan are also included.
OpenNebulaConf2015 1.07 Cloud for Scientific Computing @ STFC - Alexander Dibbo - OpenNebula Project
The Science and Technology Facilities Council is a UK Research Council which funds research and provides large facilities to the UK Scientific Community. This includes running a Tier 1 site for the LHC computing project, the JASMIN Super Data Cluster and a number of other HPC and HTC facilities. The Scientific Computing Department at the Rutherford Appleton Laboratory has been developing a cloud for use across both sites of the Department and in the wider scientific community. This is an OpenNebula cloud backed by Ceph block storage. I will give a brief background of the project, describe our setup, some use cases, and the work we have done around OpenNebula (including a simplified web front-end and a number of hooks to provide us with traceability). I will also discuss how we are creating an elastic boundary between our HTC batch farm and cloud.
Author Biography
I am a Systems Administrator in the Scientific Computing Department of the UK’s Science and Technology Facilities Council. I work as part of the cloud team and I also work on a number of Grid services including our HTC batch farm for the LHC computing project.
Prior to my position here I worked in IT at an SMB focusing on storage and virtualisation, in particular Hyper-V and VMware.
Continuous Delivery with Jenkins and Wildfly (2014) - Tracy Kennedy
A presentation on a continuous delivery pipeline that leverages Jenkins Enterprise, Jenkins Operations Center, Nexus, HAProxy, and Wildfly. Pipeline components run in Docker containers along with SkyDock/SkyDNS for service discovery and NSEnter for command-line access to containers.
This document summarizes the process of moving a datacenter from an old converted kitchen space to a new dedicated facility over a 4 day period. Key aspects of the move included 3 years of planning, virtualizing servers, upgrading storage and networking, and improving disaster recovery capabilities. The move improved stability, reduced power outages and failures, separated test and production systems, and enabled remote management. It established an effective disaster recovery plan and provided a flexible computing environment to support the organization's move to a new headquarters location with minimal business disruption.
OpenNebulaConf2018 - How Inoreader Migrated from Bare-Metal Containers to Ope... - OpenNebula Project
See how Inoreader migrated from Bare-metal servers to OpenNebula + StorPool. Inoreader has reached a tipping point where it was no longer sustainable to add hardware servers to store the billions of articles that hundreds of thousands of users read every day across the globe. With OpenNebula and StorPool we can now utilize those servers far more efficiently and no longer worry about performance and downtime.
This project aims to monitor structures in real time for cracks and bending using wireless sensors. The sensors detect physical data from structures and send it to the cloud for processing and analysis. This allows users to identify damage and potential hazards, saving lives. The proposed system uses MEMS and vibration sensors with low-power WiFi modules to wirelessly transmit sensor readings to the cloud for viewing on mobile apps. This provides a robust, flexible and cost-effective alternative to existing wired structural monitoring systems.
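A minimal sketch of the sensor-side logic might look like the following: compute the RMS of a window of vibration samples and flag readings above a threshold. The threshold value, sensor names, and return format are all invented for illustration; the actual system would forward each reading to its cloud endpoint rather than just returning it.

```python
import math

VIBRATION_THRESHOLD = 0.8  # hypothetical RMS limit for this example

def rms(samples):
    """Root-mean-square of a window of vibration samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def check_structure(sensor_id, samples):
    """Classify one reading window; a real node would then POST
    this record to the cloud for analysis and mobile-app display."""
    level = rms(samples)
    status = "ALERT" if level > VIBRATION_THRESHOLD else "ok"
    return {"sensor": sensor_id, "rms": round(level, 3), "status": status}

print(check_structure("beam-7", [0.1, -0.2, 0.15, -0.1]))   # status: ok
print(check_structure("beam-7", [1.2, -1.1, 0.9, -1.3]))    # status: ALERT
```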
Blake Krone gives a presentation on advanced RF design and troubleshooting. He discusses how design goals have changed from prioritizing coverage to prioritizing capacity as mobile device usage has increased. He emphasizes the importance of considering airtime, SNR, frequency reuse and channel planning, and network infrastructure in RF design. Krone also discusses using tools like site survey software, spectrum analyzers, and testing devices to help demystify and improve RF design and troubleshooting.
Next Gen Storage and Networking in Container Environments - September 2016 Ra... - Shannon Williams
Since its initial release, Rancher has included cross-host networking for containers. For storage, we introduced the Convoy storage driver framework, which enabled Docker containers to access persistent storage volumes implemented by NFS or EBS.
Now, in Rancher 1.2 we're introducing fully pluggable networking and storage frameworks for Rancher environments. In our September 2016 Online meetup we introduced this concept and demonstrated how to deploy new Rancher storage services.
Harnessing the Power of Master/Slave Clusters to Operate Data-Driven Business... - Continuent
This document discusses Continuent Tungsten, a product that provides high availability, performance scaling, and data management for MySQL database clusters. It introduces Continuent and describes how Tungsten uses master-slave replication across multiple database servers to provide high performance, high availability, and transparent read/write access to applications. The document also provides examples of how Tungsten handles transactions, load balancing, failover, and rolling upgrades without downtime.
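The transparent read/write splitting described above can be sketched with a toy router that sends writes to the master and spreads reads across replicas round-robin. This is a simplified, hypothetical model for illustration only, not Tungsten's actual connector logic; the host names are invented.

```python
import itertools

class ClusterRouter:
    """Toy query router for a master/slave cluster: writes go to the
    master, reads are distributed round-robin across replicas."""

    WRITE_VERBS = {"INSERT", "UPDATE", "DELETE", "REPLACE"}

    def __init__(self, master, replicas):
        self.master = master
        self._reads = itertools.cycle(replicas)

    def route(self, sql):
        verb = sql.lstrip().split(None, 1)[0].upper()
        if verb in self.WRITE_VERBS:
            return self.master
        return next(self._reads)

router = ClusterRouter("db-master", ["db-replica-1", "db-replica-2"])
print(router.route("INSERT INTO t VALUES (1)"))  # db-master
print(router.route("SELECT * FROM t"))           # db-replica-1
print(router.route("SELECT * FROM t"))           # db-replica-2
```

A real connector also has to account for replication lag, replica health, and failover promotion, which is where products like Tungsten earn their keep.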
OpenStack Summit Tokyo - Know-how of Challenging Deploy/Operation NTT DOCOMO... - Masaaki Nakagawa
DOCOMO MAIL is a 24/7 cloud mail system accessed by over 20 million people. The system stores users' mail archives in OpenStack Swift with petabyte-scale capacity, deployed by NTT DATA.
We have been successfully operating this service since Sep 2014 without any downtime. In this session, we'll present the actual issues and challenges we have faced and conquered.
Here are some specific points we'd like to highlight:
* No service degradation, no downtime.
* Massive scale, and still growing.
* Hundreds of servers operated by a small team.
The document discusses high performance infrastructure for Server Density which includes 150 servers that have been running since June 2009 and migrated from MySQL to MongoDB. It stores 25TB of data per month. Key aspects of performance discussed are using fast networks like 10 Gigabit Ethernet on AWS, ensuring high memory, using SSDs over spinning disks for performance, and factors like replication lag based on location. The document also compares options like using cloud, dedicated servers, or colocation and discusses monitoring, backups, dealing with outages, and other operational aspects.
The document summarizes an internship at a county MIS department where the intern worked on several IT projects including setting up cubicles, recycling old data by destroying hard drives and tapes, building a "lamp server" running Ubuntu and WordPress, installing various software and upgrades, building a domain with a Windows server, and learning about networking equipment, applications, and telecommunications. The internship provided hands-on experience with servers, switches, computers, printers, and other IT equipment as well as applications like Windows, Linux, MySQL, PHP, and virtualization software.
The document outlines a plan for a new client/server network for Waterfront Tele-Support. Key points include:
- The current 10base2 network needs replacing with a new up-to-date infrastructure.
- An extended star topology is proposed using switches, servers, firewall and wireless access.
- The server will run Windows Server 2008 and provide file storage, web hosting, remote access, backups and more.
- Security measures like firewalls, filtering and permissions are included to protect the network.
- Budget is £30,000 and all new equipment will be purchased within this budget.
The presentation summarizes a project to rebuild the network infrastructure for Commerce Technical Schools. The project will involve installing new computers, phones, printers, and cabling across three school sites. Key details include an estimated cost of $252,342.44, a projected timeline of 45 days, and an overview of the network design, equipment, and security measures to be implemented. The project team, led by David Wischhusen as project manager, will work to deliver the new infrastructure on schedule and within budget while meeting necessary industry standards and regulations.
A disparate network architecture and communication outages led to downtime and production loss at a Jack Daniels bottling facility. By transitioning to a scalable, secure plantwide CPwE architecture, Jack Daniels cut downtime, improved the diagnostics available to operations and maintenance personnel, and prepared to connect enterprise and process networks.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers - akankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
How to Get CNIC Information System with Paksim Ga.pptx - danishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
GraphRAG for Life Science to increase LLM accuracy - Tomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
OpenID AuthZEN Interop Read Out - AuthorizationDavid Brossard
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demoed their authorization endpoints conforming to the AuthZEN API
AI-Powered Food Delivery Transforming App Development in Saudi Arabia.pdfTechgropse Pvt.Ltd.
In this blog post, we'll delve into the intersection of AI and app development in Saudi Arabia, focusing on the food delivery sector. We'll explore how AI is revolutionizing the way Saudi consumers order food, how restaurants manage their operations, and how delivery partners navigate the bustling streets of cities like Riyadh, Jeddah, and Dammam. Through real-world case studies, we'll showcase how leading Saudi food delivery apps are leveraging AI to redefine convenience, personalization, and efficiency.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: A prima vista, un mattoncino Lego e la backdoor XZ potrebbero avere in comune il fatto di essere entrambi blocchi di costruzione, o dipendenze di progetti creativi e software. La realtà è che un mattoncino Lego e il caso della backdoor XZ hanno molto di più di tutto ciò in comune.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
My name is Rich Casselberry. I run the network and security for Extreme Networks. We are the fourth largest network equipment provider in the world. We have some unique features that give great visibility and control in the data center that you aren’t going to hear about today. Because today isn’t about that.
We all hear about super cool new or upcoming technology too. I read an article last month about disaster avoidance. Apparently it's not cool anymore to be able to recover from a disaster; instead people just move their virtual data centers somewhere else while the hurricane, snowstorm, or tornado goes through, then move them back on the fly. Very cool. I wish I could do that, but like a lot of people I still have a data center. A real one, with air conditioning and UPSes. I'm not talking about that either.
Thanks.
“Don’t take this the wrong way but”
Yeah, we all know what that means, right? It means the person telling you that is about to call you an idiot, and they are hoping that by saying it first you won't realize it, giving them enough time to get away before you figure it out.
A friend of mine is a freelance writer and was doing a story on the dumb things we have done in IT. He asked for stories and having been in IT for a long time I quickly fired off a list of some of the top of mind blunders. He emailed me back in 5 minutes with “Dude, we need to talk.”
We scheduled a 30 minute call and 90 minutes later he said “Don’t take this the wrong way but how is it you haven’t been fired?”
How could I take that the wrong way right?
When Sherry first asked me to talk here I was flattered. Deep down, though, I knew it was because she knows how many incredibly stupid things I have done, and really what she wanted was for me to share them so you don't make the same dumb mistakes.
This is the story about one of those.
Instead I wanted to share one of my biggest blunders, well, a collection of blunders: what happens when the company that provides your colo data center space decides you aren't a strategic customer and actually asks you to leave… in 45 days.
We were using colo data center space, and on February 4th, my sister's birthday actually, 8 years ago, we got a letter that basically said "pursuant to section 3, paragraph c, we are required to give you not less than 45 days to vacate the facility."
Lesson 1 – Read the fine print…
Everyone knows it takes 12-18 months to move a data center; if you are aggressive, maybe 6 months. Yet most contracts can be terminated with 45 days' notice from either party.
Now, to be fair, we had been in the data center for over 2 years and our contract had expired, but we were paying month to month. We were also pretty open that we were moving our data center back in house so we could use it to show customers how we build data center networks and manage our data center. But we were also clear that we were going to move in late fall, not February.
So I called our sales rep and said “really 45 days? I can’t even get a circuit that fast”
He replied with “Oh yeah I meant to give you a heads up on that. I did talk to corporate and they agreed to let you stay if you sign a 2 year contract, at double the square footage cost.”
He went on to explain that the price they were charging us was causing them to lose money and was significantly below market rates. Now, I'm pretty in touch with market rates in Boston, and while we had a good rate, it wasn't half price. To put it in perspective, for us this meant almost $1.2M. We didn't really have a plan, so I politely got out of the call.
Which brings up lesson 2.
We had plans and had started building out new power feeds, UPS, switchboards, 360 tons of cooling, and a 4,000 sq. ft. data center on site, but there was no way it would be ready in time.
We didn’t have enough space, power or cooling to adequately fit in our existing data rooms and didn’t have any idea if we could do it.
I met with the team that night and remarkably no one said "Can't be done." To me that was the most amazing part: no one gave up. We took a quick look at our existing room and, like probably a lot of data centers, there was a lot of stuff that was old, some stuff turned off, and cabinets that were half full.
We had our electrician measure the power used and found someone that would rent us 4 ton portable air conditioners. We thought it might be possible so I went to the CIO. He thought I was nuts, but he was open to it. We had 3 days to convince him we could do it.
We did a power audit and a space audit and decided it was close, but we were also able to free up 5 cabinets from an old remote lab that we had planned to decommission but had never been important enough to do. We made it important enough to do. Those 5 cabinets gave us the extra room we needed and the momentum to convince the executives we could do it.
So I called our sales rep back and said “You know we thought about what corporate offered and what I’d like you to do is go back to corporate and tell them to take a big bite out of my ass. We’re leaving and will be out in 5 weeks”
We knew that everything else we were doing was on hold for the next 6 weeks so the first thing we did was send an email to the entire company, letting them know what was going on.
We were still really tight on power, not just power in the data center but power to the whole building. At one point we calculated we had a spare 0.7 amps if we moved everything and the ACs were running. That was a bit too tight, so we spent a lot of the first week sweeping the building, being pretty ruthless about turning off anything nonessential: personal refrigerators, old monitors, space heaters.
We also had another team look at what we could virtualize. VMware was still pretty new, but we had been using it for test and development, just not production. We virtualized 35 systems that first week. We also designed our backup system, since the colo provider had managed all of that for us. Luckily we have good relationships with most of our vendors, and they jumped through hoops to help us.
We were also halfway through a storage migration from EMC to LeftHand storage. We pretty quickly realized we could use that extra space and move our virtual machines over the network. And we met with the applications team and the rest of IT to make sure everyone knew the schedule and the plan, and worked up a pretty robust application list broken up by criticality.
We had planned to meet with some local data center companies, including one in the same business park as us, just in case, but decided to cancel once we were sure we could pull it off. Hedging our bets seemed to make it more risky and less certain, so we doubled down on the move.
We started week 2 fully committed to the plan and with the full support of the company.
We ordered our backup environment, new cabinets and network gear and AC units.
We identified 35 test and development servers that we could move. With no objections, we loaded them in a truck on Wednesday, and by 11:00 they were in our data center. By 5:00 they were racked and had power cables plugged in. The next day we ran all the network cables and powered them on.
We also designed the cabinet layout in Visio, built a temperature monitoring system so we could make sure we weren't cooking anything, figured out how to get the networks to live in both places, and got final approval for the move weekends: one on 2/29 and one the following week.
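The "don't cook anything" check can be sketched roughly like this. This is a hypothetical sketch, not our actual system: the threshold, location names, and the source of the readings are all stand-ins for whatever probes you happen to have.

```python
# Rough sketch of a rack-temperature check, assuming you can collect
# per-location readings into a dict (our probes and threshold differed).
HOT_THRESHOLD_C = 32.0

def too_hot(readings, threshold=HOT_THRESHOLD_C):
    """Return the (location, temp) pairs above the threshold, hottest first."""
    offenders = [(loc, t) for loc, t in readings.items() if t > threshold]
    return sorted(offenders, key=lambda pair: pair[1], reverse=True)
```

In practice you would poll this in a loop every minute or so and page someone whenever the offender list is non-empty.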
Week 3
We virtualized more systems over the weekend, figured out power was harder than we thought, and found a few more cabinets that we could move to our lab area if needed. They would need rack mount UPSes if we did that, since there isn't redundant power in the labs. The first third of our cabinets showed up, along with some custom length power cords. 18" is the right length; 8' is too long.
We also built our ESX environment on new hardware in Andover, which would allow us to move the virtual machines nearly live. We also started testing our backup system to make sure we could import the catalog in case we needed to restore old tapes.
We did figure out that where we planned to place cabinets was also where the permanent AC unit was going so we had to redo the design. It all still fit, but required a bit of work.
The backup gear and the remainder of the cabinets arrived and we started placing them, installing network switches and running fiber.
We got our 4 portable ACs and, after running some extension cords, got them powered up and cooling. We were also still removing old systems and pulled another 15 out, reducing power draw by 30 amps.
Many of the bus ducts were fed from a 200 amp circuit, but much of the power we used came from a 400 amp circuit, so we had to run extension cords across the various rooms to get power where it was needed.
Move 1 started Friday at 2.
We had figured out which systems could move. Anything critical or attached to the Fibre Channel SAN had to wait; anything else was fair game. We even broke redundancy on some systems to even out the load and reduce the risk for the more critical move.
We started breaking down at 3PM with the admins on the phone.
They would tell us when a system was down; we'd run over, pull it out, and put it on the pile. The pile was decided based on where it was going, not where it came from. We had a worksheet for each one that we used to keep us on track.
Saturday AM team 2 started racking servers and by the time team 1 got back on site we could start network cabling. By Saturday night we had brought up some of the systems and Sunday we were able to finish it all up.
We reviewed on Monday afternoon what went well and what didn't. When we looked at power we knew there was no way we had enough, so we bit the bullet and moved the 2 racks of engineering servers to the lab.
Moving these 35 servers from the MDF to the lab freed up 50 amps of power. They went to an area of the lab with very little equipment running, so power and cooling were not an issue. We had to run fiber and an N1 to provide connectivity.
Friday night we broke down everything and got it on the truck, then followed the truck from Boston to Andover and unloaded the pallets. Then we went to bed and the next team started racking the servers in the cabinets. When we got back up around 8 and got into the office, we started cabling. By the end of Saturday all the servers were pretty much ready to turn on. We brought up some of the base infrastructure but left most of it off until Sunday AM.
Driving in Sunday and seeing a power truck at the end of the road was not how I wanted to start the day. Luckily, almost all of the machines came up, in the right order and with no major problems. One or two drives needed to be reseated and a power supply changed, but nothing catastrophic. By noon we were largely done testing and feeling pretty good.
At the end of it we had moved around 150 servers, 160 fiber cables, 700 copper cables, and 260 power cords, and added 80 servers to a new backup environment.
OK so what did we learn?
Tip 1. Plan everything. We have a guy who does these crazy detailed plans; this was his whiteboard for the first move. Literally, he would have a schedule that said "From 7:45 to 8:37, you will be cabling in cabinet r1c3." Really down to the minute. And of course 5 minutes in, the plan was already off track, but it allowed us to space people out and have a base to start from.
Tip 2. You absolutely have to have perfect documentation. We helped one customer move their data center, and when we asked about their docs they said they were good, probably 90% right… We spent 3 weeks helping them get to 100% correct, literally using a paperclip on the cable to make sure we didn't get anything wrong.
A few other parts to this tip. If everyone has a different copy of the docs, throw all of them out and start over.
One last bit on this, the docs can’t be online, at least not on the servers you are moving…
Tip 3. We were really good about communicating our progress throughout the project. This kept almost everyone aware of what was going on. Almost. We did have one VP from South America call us just after we turned off the ERP system because he needed to process a hot order. Luckily we were able to power it back up and it didn't delay us too long, but we also knew that if we couldn't bring it back, we were covered. During the moves we also had everyone dial into a conference call and stay muted, so if we needed someone we could just ask for them. It also kept everyone tuned in to what was going on.
Tip 3b. Really make sure you are muted when going to the bathroom… different story
Tip 4. Remember that people are people. They don't like cold temps, wind, or noise, so if you can turn stuff off, good.
They also need to be fed regularly and stop for sleep.
We did one data center move where we worked 38 hours straight. Everyone was completely useless halfway through it. Now we do teams: one team breaks down, then sleeps for 6-8 hours while team 2 racks and cables. We also try to have a separate team for testing and troubleshooting, but we're a small team and that rarely happens as well as we would like.
All the teams are team members. Many times other departments, contractors or subcontractors will help. Treat them as if they were your own employees. The success of your project depends on them too.
Tip 5. Sort of obvious, but make sure you have what you need. This is also why it's good practice to do a test move early. Finding a roll of tape, or carts, or screwdrivers on a Tuesday afternoon is easy; Saturday at 3AM, not so much. Some of the things we forgot: cable ties, tools, carts, a pallet jack, tape, paper (yes, including toilet paper).
Go over the process ahead of time. A walkthrough the day before and again an hour before will make sure people don’t get confused when they start putting servers back where they came from.
Shutdown and startup order is important. If you try to bring up servers before the domain, or some applications before the database servers, you can cause issues. If there is an order, make sure the people racking and cabling the servers leave them off until they are ready to come up.
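One way to keep shutdown and startup order straight is to write the dependencies down once and derive both orders from them. A minimal sketch, assuming Python 3.9+ and entirely hypothetical server names:

```python
from graphlib import TopologicalSorter

# Hypothetical dependency map: each server lists what must be up before it.
deps = {
    "domain-controller": [],
    "database": ["domain-controller"],
    "app-server": ["database"],
    "web-frontend": ["app-server"],
}

# Startup brings dependencies up first; shutdown is simply the reverse.
startup_order = list(TopologicalSorter(deps).static_order())
shutdown_order = list(reversed(startup_order))
```

Handing the racking team a printed startup order, and leaving servers off until their turn, avoids the bring-up-before-the-database problem entirely.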
Make sure everyone understands the port numbering. If the switches go 1-24 on the top row and some people think it is odd on the top and even on the bottom, you will have problems.
Same goes for power. We had some admins think A power was on the left and B power on the right, but some cabinets had 4 PDUs, and some servers got plugged in with the thinking that front was A and back was B. When we did a power test and turned off B power, half of the servers went down. Luckily one of our admins was smart enough to point out that if a server didn't lose either power supply, the wiring was still wrong: you should have lost one, which means both supplies were plugged into A.
Probably one of the most important slides.
A team of two will take 5 minutes to rack a server in a cabinet, 5 minutes per Ethernet cable, and 2 minutes per power cable.
On our last move, because we didn't want to have to buy new cabinets and network gear, the total downtime was going to be 67 hours: we would shut down at noon on Friday and be back up Monday at 8am. Everyone seemed good with this, until 2 weeks before, when suddenly that was too long. Luckily we have moved enough that no one questioned the estimate; instead we agreed to buy new cabinets and network gear, and we were back up Saturday night instead. Once we walked them through the math it was an easy discussion.
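Those per-task timings make the downtime math easy to walk an executive through. A toy estimator with the rule-of-thumb numbers from above baked in; the per-team split is a simplification, and it deliberately ignores teardown, transport, and testing:

```python
def estimated_racking_hours(servers, ethernet_cables, power_cables, teams=1):
    """Rack-and-cable labor estimate in hours, using the talk's rules of
    thumb: 5 min per server, 5 min per Ethernet cable, 2 min per power
    cable, all for a two-person team."""
    minutes = servers * 5 + ethernet_cables * 5 + power_cables * 2
    return minutes / 60 / teams

# Roughly the scale of our big move: 150 servers, ~860 network cables,
# 260 power cords comes out to about 93 hours of labor for a single team,
# which is exactly why you parallelize with multiple teams.
```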
Always have a plan B and a plan C. When we moved our data center, the building we were moving to had elevators that would pretty regularly stop working and could be down for 6 hours or more while we got the repair guys in. We actually had a backup plan where we would put the servers on carts and have teams carry them one by one up to the third floor. In our plan we calculated how much more time this would take, and we had people on standby just in case we needed "runners."
Some of this may sound silly, but as much as you can do prior to the shutdown, you should do. Even things like unwrapping the patch cables and taking the ties off them can save a lot of time: 30 seconds per cable doesn't sound like much, but when there are 360 of them that's almost 3 hours' worth of time. Interestingly enough, labeling the cables ahead of time is a horrible idea. It seems like a really good idea, but inevitably they will get mixed up, and you will spend much more time trying to find the right cable than if you just label it then. Having the labels printed helps.
Also, if you can reboot the servers a week early, this helps in case you have applied patches and forgotten to reboot. Problems that have nothing to do with the move will get caught early, and you won't run down a rat hole thinking they are related to the move when they aren't.
Develop a priority list ahead of time. When you get to the end and stuff isn't all working is not when you want someone deciding what's more important. It's likely that whoever is making that call is already tired and probably going to make a stupid decision. Avoid that: plan early on what can wait and what needs to be running.
Plan time to test and troubleshoot. We actually built a quick and dirty little script that just pinged all the servers before the move and showed what was still up or down, so at a glance we could tell how much was left to do. It was a .bat file (for those that remember those) since we wanted it to run when nothing else was online. It even pinged by IP, not hostname, since DNS might not be running.
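Our version was a .bat file; a rough modern equivalent in Python might look like this. The IP list is hypothetical, and the ping flags shown are the Linux ones (you'd swap -c/-W for -n/-w on Windows):

```python
import subprocess

def is_up(ip, timeout_s=1):
    """Single ICMP ping; True if the host answered (Linux ping flags)."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), ip],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def sweep_report(statuses):
    """Split an {ip: up?} map into sorted up/down lists for the glance test."""
    up = sorted(ip for ip, ok in statuses.items() if ok)
    down = sorted(ip for ip, ok in statuses.items() if not ok)
    return up, down

# During a move, with a hypothetical SERVER_IPS list of plain IPs:
#     up, down = sweep_report({ip: is_up(ip) for ip in SERVER_IPS})
```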
Now, we also had a full test plan, including placing orders, releasing shipments, printing reports, etc. Testing and the corresponding troubleshooting always take longer than you expect, but the more testing you do before, the fewer problems you will have on Monday. When we finally moved into our "permanent" home, the CFO didn't believe we had actually moved because everything was working on Monday.
Finally, after you are done, review and recognize what you accomplished. We tracked every issue we had and its resolution, and how much time each step took, which is why we know how many minutes an Ethernet cable takes. When we complete our moves we send out a summary email to the company on the amount of stuff we moved and the issues we found before we unleashed the users on the systems. One of the things I've found is that when users know the effort that goes into these moves, they are more understanding when things go wrong.
We actually use Chatter, which is sort of an internal Facebook app, for informal communications, and during the move we post photos and updates to the company. It's a SaaS application from Salesforce, so it works even when we are moving our data center… and it's paid dividends with our user community. In fact, we have even had departments bring us donuts on the Monday after a move to show their appreciation.
One last tip I've learned. If you are the manager for the project or team, be on site and involved in the move. Stay out of the way and don't try to "help," but get coffee, food, and snacks, coil up the old cables, sweep the floor, etc.
Let’s be honest, you probably aren’t that much help, but having you there emphasizes how important the project is. Besides, if it goes really badly at least you know you can sleep in late on Monday. You’re probably going to get fired anyway, might as well be rested.