This white paper provides a detailed discussion of technology refresh for client computing. Unlike previous technology refresh cycles, this cycle is occurring in the context of other significant trends in client computing.
This presentation was used to obtain funding to refresh 50% of the desktop inventory (~1100 units) in FY11 and obtain support for increasing utilization of existing assets through improved computer lab scheduling.
When does it make sense to upgrade to more efficient servers? Most data centers operate on a 3 to 5 year tech refresh cycle. Is this really the best way to decide when to refresh old equipment? By continuously monitoring the cost to run older equipment, you can determine when you have hit the break-even point with your existing servers. Join Viridity co-founder and CTO, Mike Rowan, as he reviews best practices for technology refresh.
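The break-even logic described in the talk can be sketched in a few lines. The cost figures below are illustrative assumptions, not data from the presentation:

```python
# Sketch of the break-even check described above: compare the cumulative
# cost of running existing servers against refreshing to newer hardware.
# All dollar figures are illustrative assumptions.

def months_to_break_even(old_monthly_cost, new_monthly_cost, refresh_capex):
    """Return how many months until a refresh pays for itself,
    or None if the new hardware is not cheaper to run."""
    monthly_savings = old_monthly_cost - new_monthly_cost
    if monthly_savings <= 0:
        return None
    # The refresh capex is recovered once cumulative savings exceed it.
    return refresh_capex / monthly_savings

# Example: an old server costs $500/month (power, maintenance, support);
# a replacement costs $200/month to run but $7,200 up front.
print(months_to_break_even(500, 200, 7200))  # -> 24.0
```

Continuous monitoring amounts to re-running this comparison as the old equipment's operating costs drift upward, rather than waiting for a fixed 3-to-5-year calendar date.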
The document describes a server refresh program that was previously disorganized and tracked via Excel spreadsheets. It outlines how the program applied lean, kanban and theory of constraints (TOC) principles to improve organization and throughput. Key changes included defining different server migration paths to reduce bottleneck meetings with architects, using a kanban board to visually track server progress, establishing work-in-progress limits, and ensuring a clear definition of completion by removing servers from support contracts. This helped increase throughput by 300% while standardizing processes and documentation.
The document discusses disaster recovery planning and strategies. It notes that over 50% of organizations develop IT disaster recovery plans, but fewer than one-third actually test those plans. It also discusses how budget constraints are leading organizations to consider alternatives like server virtualization and cloud computing to reduce costs associated with disaster recovery. The document provides an overview of different aspects of developing a disaster recovery plan, including defining recovery time objectives, prioritizing applications, replicating data, and assessing risks. It emphasizes the importance of involving business needs in planning and testing plans to ensure the ability to recover critical business functions.
The speed and productivity benefits of high-performance cloud computing are well documented. For numerically large engineering simulations, a flexible cloud environment typically delivers faster run times, allowing engineers to solve complex problems quickly ― and launch products more rapidly. The world's leading product development teams are already leveraging high-performance computing (HPC) resources, yet many of them remain uncertain about the costs of replacing on-premises hardware and software with cloud hosting.
To clear up the confusion and demonstrate that the cloud delivers a total cost of ownership that is lower than on-premises computing, Ansys has published “A Break in the Clouds: The Cost Benefits of Ansys Cloud.” The white paper illustrates how Ansys Cloud delivers all the speed and efficiency that customers expect from HPC in the cloud ― along with Ansys’ software ― at a cost lower than an on-premises approach.
Technical Debt, Unplanned Work and the Toyota Way (Hans Nygaard)
Technical debt is a metaphor used in software development that refers to the implied cost of additional rework caused by taking shortcuts or using limited solutions rather than better approaches that would take more time. This concept can also apply more broadly to IT assets like applications, technologies, platforms and hardware. Technical debt is incurred from the beginning through quick solutions but also grows over time as requirements and technologies change. Ignoring technical debt can lead to an increasing amount of unplanned work that disrupts planned projects and consumes an IT department's resources. Managing technical debt involves assessing debt levels, choosing a payment strategy like replacing or updating assets, preventing new debt, sticking to plans, and tracking progress.
1) Business continuity planning (BCP) involves maintaining business operations during disruptions through alternative sites, data backups, and emergency plans. It is important for banks to mitigate risks from hardware failures, natural disasters, and other events.
2) A BCP has several phases including initiation, analysis of business impacts, plan design and development, implementation, testing, and maintenance. It may involve alternatives like cold sites for future expansion or hot sites that are immediately available.
3) Performing a business impact analysis identifies critical systems and functions and their tolerance for downtime. It assists in risk assessment and prioritization of recovery needs. Data centers are important IT assets that require redundancy, reliability, security, and environmental controls to ensure continuous operations.
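The downtime-tolerance ranking at the heart of a business impact analysis can be sketched as a simple RTO ordering. System names and hour figures are illustrative assumptions:

```python
# Minimal sketch of the business impact analysis step described above:
# rank systems by their tolerance for downtime (a simple RTO ordering).
# System names and hour figures are illustrative assumptions.

systems = {
    "core banking": 1,   # max tolerable downtime in hours (RTO)
    "email": 24,
    "atm network": 2,
    "reporting": 72,
}

# Recover the least downtime-tolerant systems first.
recovery_order = sorted(systems, key=systems.get)
print(recovery_order)
# -> ['core banking', 'atm network', 'email', 'reporting']
```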
This document discusses data center consolidation as a key strategy for IT cost cutting. It notes that 69% of IT costs come from operations and maintenance of existing systems, and that consolidating data centers can reduce these costs by decreasing the number of data centers, servers, software licenses, and power usage. The document recommends migrating to virtualized hardware and cloud platforms as part of consolidation efforts to further reduce costs, while also implementing strategic disaster recovery functionality. It emphasizes planning application migrations carefully to avoid issues that could extend timelines or budgets.
In 2011, Uptime Institute conducted its first annual data center industry survey, collecting data from mission-critical data center owners and operators worldwide. This review continues in that spirit and adds additional industry hot topics.
This document discusses Kanban and how it can be applied to software development projects. Kanban originated in the Toyota Production System and uses visual cards to support non-centralized production control. In Agile software development, visualizing tasks on boards is sometimes called "Software Kanban". The document outlines Lean principles from Toyota that can eliminate waste in software projects. It provides examples of how Kanban boards can be used to visualize tasks, features, burndowns, and schedules. Kanban aims to reduce work-in-progress, balance capacity against demand, and prioritize tasks.
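A minimal sketch of the WIP-limited Kanban board described above. Column names and limits are illustrative assumptions:

```python
# Minimal sketch of a Kanban board with work-in-progress (WIP) limits.
# Column names and limit values are illustrative assumptions.

class KanbanBoard:
    def __init__(self, wip_limits):
        self.wip_limits = wip_limits                    # e.g. {"Doing": 2}
        self.columns = {name: [] for name in wip_limits}

    def add(self, column, task):
        """Pull a task into a column only if its WIP limit allows it."""
        if len(self.columns[column]) >= self.wip_limits[column]:
            return False  # limit reached: balance capacity against demand
        self.columns[column].append(task)
        return True

board = KanbanBoard({"To Do": 10, "Doing": 2, "Done": 100})
board.add("Doing", "migrate server A")   # accepted
board.add("Doing", "migrate server B")   # accepted
print(board.add("Doing", "migrate server C"))  # -> False (WIP limit of 2)
```

Refusing the pull is the point: work queues up visibly in the upstream column, exposing the bottleneck instead of hiding it in growing work-in-progress.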
The Power of 3 - IBM PureApplications, SoftLayer and General Operational Eff... (Prolifics)
1. The document discusses how organizations can reduce IT spending by simplifying, automating, and adapting their operations. It states that currently only 35% of IT budgets go towards new projects while 65% is spent on maintaining existing infrastructure.
2. Simplifying operations involves using standardized patterns and platforms like IBM PureApplication and SoftLayer to reduce complexity. Automating processes such as testing, deployment and configuration can reduce costs and errors.
3. Adapting culture and processes, for example using DevOps and continuous delivery practices, can help break down barriers and improve efficiency across the entire IT lifecycle. The overall goal is to free up more of the IT budget for new projects that drive business innovation.
20th March Session Three by Ganesan Arumugam (Sharath Kumar)
The document discusses how virtualization can help organizations save energy through reducing costs and improving efficiency. It notes that most IT budgets are spent on maintaining existing infrastructure rather than innovation. Virtualization allows consolidation of servers which can reduce capital and operational costs by 50-60% and 25% respectively. It also decreases administrative time by around 33% and data center energy costs by up to 80%. This saves financial, human, and environmental energy which can then be reinvested in innovation. VMware is presented as the proven industry leader in virtualization with over 170,000 customers including most Fortune 500 companies.
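As a back-of-the-envelope check of the consolidation figures above: only the percentage reductions (50-60% capital, 25% operational, up to 80% energy) come from the text; the baseline costs are illustrative assumptions:

```python
# Back-of-the-envelope check of the virtualization savings cited above.
# Baseline dollar figures are illustrative assumptions; the percentage
# reductions come from the document (55% is the midpoint of 50-60%).

def virtualization_savings(capex, opex, energy,
                           capex_cut=0.55, opex_cut=0.25, energy_cut=0.80):
    """Return annual savings from server consolidation."""
    return capex * capex_cut + opex * opex_cut + energy * energy_cut

# Example: $1M capex, $400K opex, $150K data center energy per year.
print(virtualization_savings(1_000_000, 400_000, 150_000))
# -> 770000.0  (550000 capex + 100000 opex + 120000 energy)
```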
BigFix helped Chichester School District simplify IT operations and reduce costs by providing:
1) Real-time visibility and control over patching 2,000 PCs and 50 servers across six sites, reducing manual troubleshooting.
2) Automated power management that reduced energy costs by 70% per PC while ensuring machines could be remotely patched.
3) A 99% first pass patch success rate compared to 50% previously, minimizing manual work to resolve issues.
Data Center Transformation Program Planning and Design (Joseph Schwartz)
The document summarizes Sunnyside Associates' data center transformation program planning and design service offering. It begins by outlining why data center transformation and cloud computing projects often fail, such as poor planning, lack of leadership support, and unrealistic timelines. It then describes Sunnyside's solution, which includes establishing a clear program vision based on the client's needs, using industry best practices, and taking a phased approach involving opportunity identification, performance benchmarking, due diligence, and governance planning. The document concludes by stating the criteria for program success, such as well-defined governance roles and accountability, and calls the reader to contact Sunnyside for more information.
Unicom Conference - Delhi, July 2012. Data Center Automation by creating Center Of Excellence. How to radically improve Data Center efficiency? How to go about proceeding with Automation. Is it easy for Automation projects to succeed? The process, people, and technology of Automation explained in brief.
This document discusses how Eaton provides resilient and efficient IT infrastructure solutions powered by modern IT solutions. It highlights how integrated power management solutions from Eaton allow organizations to view and manage their entire power system from virtualization dashboards, keep critical loads running longer during outages, and ensure maximum business continuity. It also emphasizes how distributed control architectures from Eaton are inherently safer and enable safe auto-adaptation to changing load and power conditions.
1. The document discusses strategies for transitioning from a complex legacy infrastructure to a virtual and simple infrastructure using thin client computing.
2. It outlines considerations for backup, disaster recovery, business continuity, security, and other areas when implementing thin client solutions.
3. The final section discusses developing a long term strategic plan, calculating ROI for projects, prioritizing initiatives, and creating a roadmap for execution.
The document discusses options for implementing green IT initiatives to reduce energy consumption from desktop computing. It analyzes replacing CRT monitors with LCDs, replacing desktops with thin clients or laptops, raising user awareness, deploying power policies via group policies, and using an intelligent software-based solution called infraSECURE. InfraSECURE delivers annual savings of 50-66K per 1,000 desktops without depending on user behavior, maintains productivity, requires minimal investment and effort with ROI in less than six months, and generates no e-waste. The document concludes that it is the ideal green IT solution.
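The sub-six-month ROI claim can be checked with simple payback arithmetic. The savings range comes from the text; the deployment cost per 1,000 desktops is an illustrative assumption:

```python
# Rough ROI check for the desktop power-management figures above.
# The 50K-66K annual-savings range comes from the document; the
# deployment cost per 1,000 desktops is an illustrative assumption.

def payback_months(deployment_cost, annual_savings):
    """Months until cumulative savings cover the deployment cost."""
    return deployment_cost / (annual_savings / 12)

# Assume 25K to deploy per 1,000 desktops, savings of 50K-66K per year.
print(payback_months(25_000, 50_000))  # -> 6.0 (worst case)
print(payback_months(25_000, 66_000))  # -> ~4.5 (best case)
```

Under these assumptions even the low end of the savings range recovers the deployment cost within six months, consistent with the ROI claimed above.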
The Five Myths of Cloud-Based Disaster Recovery (Axcient)
A recent study by the Disaster Recovery Preparedness Council highlighted a disturbing fact: three out of four companies worldwide are failing in terms of disaster readiness. The impact of IT interruptions ranges from thousands of dollars to millions.
No wonder the “cloud” holds such attraction and businesses are rethinking their recovery strategies. What, then, is preventing companies from fully adopting cloud solutions for disaster recovery? We believe there are five myths surrounding cloud-based recovery services that have to be dispelled.
DCIM Software Five Years Later: What I Wish I Had Known When I Started (Case ...) (Sunbird DCIM)
Steve Lancaster from Chevron presented on his experience implementing a DCIM (data center infrastructure management) solution over five years. He discussed how DCIM helped him achieve goals like asset management, aligning space and power usage, and inventory reports. Lancaster highlighted collaborating with the vendor as critical. He wished he had better understood all of DCIM's capabilities upfront and set up power monitoring. Looking ahead, Lancaster wants to utilize DCIM for capacity planning, run power failure scenarios, and streamline processes.
This document discusses best practices for data centers, covering hardware performance, high availability (HA), capacity planning, and security. It addresses choosing the right hardware, benchmark testing, HA considerations like power and cooling, clustering, disaster recovery, capacity planning factors like current and future utilization, and security risks and mitigations like physical security, patching, and data backup. The goal is to provide guidance on optimizing data center operations and ensuring continuous service availability.
The document discusses how companies are leveraging cloud computing solutions. It provides examples of how IBM has helped various organizations adopt cloud technologies to improve collaboration, access to resources, and IT efficiencies. Key benefits mentioned include reduced costs, improved flexibility, scalability and security. The document also outlines factors to consider when evaluating cloud solutions and ROI areas to investigate.
White Paper - P2 Energy - Back Office Challenges (Leo Champion)
The document discusses four major back-office challenges for oil and gas companies: heavy reliance on spreadsheets, lack of real-time reporting, inefficient workflow processes, and training issues. It argues that a cloud-based integrated software solution can help address these challenges by automating processes, providing real-time access to consolidated data, streamlining workflows, and offering scalable training resources. Implementing such a solution could help oil and gas companies improve accounting functions and focus more on their core business activities.
The document discusses managing and mitigating risk in businesses. It outlines an evolving risk landscape with new technologies, data growth, and regulatory compliance challenges. Different types of risks are described, from frequent low impact issues to infrequent high impact disasters. Key success factors for managing risk include lowering costs, ensuring compliance, protecting data and applications, and securing the data center. IBM is positioned as being able to help businesses fuel innovation, secure data, meet compliance, and secure their data centers from threats to ensure productivity and reputation.
In this presentation we discuss the business benefits of data centre power and environmental monitoring, and practical steps you can take to reduce risk and increase efficiency. Richard May bio: Richard May is the Data Centre Power SME and Country Manager for Raritan UKI and Nordics. With over 17 years' data centre experience, specialising in rack monitoring, metering and control, Richard supports Raritan customers and partners, helping to maximise the efficiency of their existing data centres and developing strategies for their new facilities.
Why Replication is Not Enough to Keep Your Business Running (Axcient)
While you may be familiar with multiple replication products and vendors, don’t confuse the technology of data or server replication with Disaster Recovery.
Replication is not a disaster recovery solution, nor does it provide business continuity. So what exactly is replication? According to TechTarget, replication is the process of copying data from one location to another over a SAN, LAN or WAN, giving you multiple up-to-date copies of your data. Look at replication as one aspect of DR/BC: although it is a key technology for implementing a complete DR/BC plan, it needs to be combined with data deduplication, virtual servers or even the cloud. But let's take a step back to really understand business continuity.
The Top 7 Considerations When Comparing Cloud vs. Premise-Based Contact Centers (TeleTech)
Calculating the true total cost of ownership (TCO) of contact center platforms requires a close examination of multiple IT and operational differences. Here are seven aspects to consider.
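A minimal TCO comparison along these lines might look like the sketch below. All cost categories and figures are illustrative assumptions; the paper's seven considerations would refine them:

```python
# Sketch of a TCO comparison between premise-based and cloud contact
# center platforms. Categories and dollar figures are illustrative
# assumptions, not data from the paper.

def tco(upfront, annual_costs, years):
    """Total cost of ownership over a planning horizon."""
    return upfront + sum(annual_costs.values()) * years

premise = tco(upfront=500_000,
              annual_costs={"maintenance": 75_000, "it_staff": 120_000,
                            "upgrades": 40_000},
              years=5)
cloud = tco(upfront=0,
            annual_costs={"subscription": 180_000, "admin": 30_000},
            years=5)
print(premise, cloud)  # -> 1675000 1050000
```

The comparison only holds if both sides include the same categories over the same horizon, which is exactly why a close examination of the IT and operational differences is needed.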
This document provides a four-step approach for organizations to transition from a legacy high availability and disaster recovery solution to an always-on platform: 1) Assess and evaluate current processes, applications, and availability requirements to identify gaps; 2) Plan and design the architecture and roadmap by applying guiding principles and considering technology, processes, people, and applications; 3) Implement and test the strategy to ensure services are meeting objectives; 4) Manage and sustain the platform through ongoing monitoring, risk response, compliance management, and performance reporting while reassessing regularly.
This document discusses the importance of software modernization for companies still relying on legacy systems. It defines legacy software as older systems that are difficult to modify and maintain. While costly, software modernization is necessary to keep up with changing technology, ensure system stability, and reduce maintenance costs. The document recommends companies first assess their legacy systems to understand the risks of maintaining the status quo versus upgrading. Based on this assessment, companies can then develop a plan and deadline to modernize their systems incrementally in a controlled manner.
This document discusses the importance of software modernization for companies still relying on legacy systems. It defines legacy software as older systems that are difficult to modify and maintain. While costly, software modernization is necessary to keep up with changing technology, ensure system stability, and reduce maintenance costs. The document recommends companies first assess their legacy systems to understand the risks of maintaining the status quo versus upgrading. Based on this assessment, companies can then develop a plan and deadline to modernize their systems incrementally in a controlled manner.
The document discusses the benefits of converged systems over traditional siloed IT infrastructures. It outlines key challenges with complexity in today's IT environments and how converged systems provide advantages like reduced costs, faster deployment times, and improved performance and availability. The summary highlights that Hitachi Data Systems provides converged infrastructure solutions called Unified Compute Platforms that integrate servers, storage, networking and software to optimize support for mission-critical applications.
This document discusses the importance of having a robust IT technical support strategy. As businesses become more reliant on integrated IT systems, downtime can have far-reaching impacts across an organization. The costs of downtime have increased significantly in recent years. The document recommends taking a holistic view of technical support using a framework that considers people, processes, and technology. It also advises conducting an assessment of the current support structure to identify areas for improvement and prioritization. The overall message is that proactively managing technical support can help businesses optimize costs and mitigate risks from downtime in today's complex IT environments.
This document discusses the importance of having a robust technical support strategy to mitigate the risks and costs of downtime. It begins by outlining how downtime can negatively impact organizations through a "ripple effect" as business processes have become increasingly dependent on integrated IT systems. It then presents IBM's framework for a comprehensive technical support strategy covering people, processes, and technology. The document advocates conducting an assessment of an organization's current support maturity level and developing a roadmap to prioritize improvements. Finally, it argues that a managed support solution through a third party can help optimize support more cost-effectively across an organization's entire IT environment.
Aitp presentation ed holub - october 23 2010AITPHouston
This presentation from Gartner discusses 10 top IT infrastructure and operations trends for organizations to watch. The trends covered include virtualization, big data, energy efficiency, unified communications, staff retention, social networks, legacy migrations, compute density, cloud computing, and converged fabrics. For each trend, the presentation provides details on how the trend affects organizations and recommendations on how to prepare and respond. The overall message is that IT leaders need to be aware of these emerging trends and develop strategies to leverage and adapt to them.
4 Ways No-Code Platforms can help IT teams in Manufacturing Industry.pptxArpitGautam20
Here are 4 exciting benefits that No Code Platforms are bringing to the table for IT teams in the manufacturing industry. Read on to know more! https://natifi.ai/4-ways-no-code-platforms-can-help-it-teams-in-manufacturing-industry/
The cumulative effect of decades of IT infrastructure investment around a diverse set of technologies and processes has stifled innovation at organizations around the globe. Layer upon layer of complexity to accommodate a staggering array of applications has created hardened processes that make changes to systems difficult and cumbersome.
Virtualization infrastructure in financial services rully feranataRully Feranata
Over many years, the IT function in financial institutions has evolved from a mere transactional tool into
a pervasive, integral element of virtually every aspect of doing business. This transformation has
constituted a fundamental, structural change in the financial services arena and has put IT performance
at the top of the CEO’s agenda at most banks and insurance companies.
The document discusses the challenges facing banks in modernizing their technology systems. It notes that banks have historically focused on rapid growth and innovation over efficiency, resulting in thousands of fragmented systems. It proposes that banks undergo an "industrialization" process to simplify their technology and business processes. This involves defining core capabilities, processes, and data assets and organizing people and technology to better support standardized processes. The document provides several recommendations for how banks can initiate this change, such as prioritizing data management, adopting service-oriented architectures, and leveraging cloud computing technologies to reduce costs. The goal is for banks to develop a "solid technical core" that is lean, integrated and operates with predictability and efficiency.
Streamline your digital transformation for a future ready venture.LCDF
Streamline your digital transformation for a future ready venture. How the Pandemic How the pandemic impacted DACH Industries the unexpected catalyst for digital Transformation ?
IEEE 2013 The flaws in the traditional contract for software developmentSusan Atkinson
We believe that the traditional contract model for software development is largely responsible for failures in IT projects. In this article we explain how in any IT project the contract model increases the risk of failure, and leads to a sub-optimal design and poor return on investment. We also put forward our proposals for an alternative approach based on the principles of complexity theory.
Digital Engineering: Top 5 Imperatives for Communications, Media and Technolo...Cognizant
Many communications, media and technology companies share similar digital objectives. Here are our recommendations for realizing five common digital goals, and a look at a few companies that have succeeded with meeting them.
Why IT Struggles With Digital Transformation and What to Do About Itrun_frictionless
To win the digital transformation race, successful CIOs need to overcome three immense challenges: Massive backlogs, legacy debt and scarce resources. And, at the same time they need to embrace new methods, better suited to fast-paced innovation.
www.runfrictionless.com
Secure, Strengthen, Automate, and Scale Modern Workloads with Red Hat & NGINXNGINX, Inc.
Learn how to support your application delivery – no matter where you are on the journey from monolithic apps to microservices.
Join this webinar to learn:
- About important considerations around digital innovation in FSI
- How to leverage automation and Ansible to deliver apps faster
- About keys to delivering modern apps securely and reliably anywhere
- How OpenShift takes the complexity out of containers
https://www.nginx.com/resources/webinars/secure-strengthen-automate-scale-modern-workloads-with-red-hat-nginx/
1) Software maintenance costs have risen dramatically and now account for over 90% of total software costs, compared to around 50% a few decades ago. Understanding existing code, lack of documentation, volatile requirements, and dispersed code are some of the main cost drivers.
2) Ways to reduce maintenance costs include (re)documentation, which can save up to 12% of total costs, eliminating dead code which accounts for up to 30% of code size, and reducing cloned code which requires duplicate maintenance efforts.
3) Proper documentation, eliminating unused code, and refactoring duplicated code can significantly reduce the growing costs of software maintenance.
A model demonstrating why SaaS is the best option for banks when accessing technology. Credit Risk systems are key interface points for bankers and an ideal case study. Banks have long ago realised owning property is not a good use of capital, and the logic is more compelling for a fast depreciating asset like software.
CTRM - The Next Generation - ComTechAdvisory Vendor Technical UpdateCTRM Center
There is no doubt that technology has undergone a sea-change over the last decade or so potentially making it possible to build and deploy software faster and more cost-effectively while offering a host of features that help users to work smarter, faster and with less opportunity for error. Additionally, the way that applications are designed and built has also changed to take better advantage of these technologies. While arguably there is no single technology that facilitates a paradigm shift in Commodity Trading and Risk Management (CTRM) software, when you combine advances in all areas of solution development and deployment technology, then such a leap forward is both likely and desirable.
Nowhere is the gap between the possibilities offered by these leaps in technology and what is available as commercial solutions more apparent than in the commodity trading and risk management software category. There are many aging, legacy, solutions still being utilized, marketed, and deployed and yet, this is an industry that is experiencing unprecedented demands and change, which in turn, are placing increasing demands on the software it utilizes. What most commodity firms are seeking is more agile software platforms that can allow them to adapt and evolve through these changes. This growing demand is also accentuated by the younger, more tech-savvy people entering the business whose expectations are not being met by many existing solutions.
5 things needed to know migrating Windows Server 2003Kim Jensen
The document provides five key considerations for organizations migrating from Windows Server 2003:
1. The migration is an opportunity to align IT with business goals and modernize processes, not just a routine system update.
2. Conducting a thorough assessment of all hardware, software, and workflows is necessary to develop an accurate timeline and avoid missed dependencies.
3. Not all existing servers and applications need to be migrated - some may be decommissioned through consolidation or replacement with newer options.
4. Updating hardware in addition to software is important to take advantage of new capabilities and ensure performance supports business needs.
5. Most organizations will benefit from partnering with an outside expert to help plan and execute the migration effectively
Microsoft a doté Windows 8 de nombreuses fonctionnalités adaptées aux besoins des entreprises, de l'amélioration du workflow et de l'ergonomie, à de nouveaux outils de gestion, en passant par des améliorations en terme de sécurité. Voici dix fonctionnalités de Windows 8 qui pourraient vous aider à améliorer votre entreprise.
Die Vorstellung, alle IT-Geräte eines Unternehmens, einschließlich Desktop-PCs, Laptops und Tablets, auf ein neues Betriebssystem umzustellen, kann beängstigend sein. Glücklicherweise bietet HP eine vollständige Suite an Migrationsservices, die den Wechsel zu Windows 7 oder 8 schnell, einfach und sorgenfrei gestalten und die Downtime gering halten.
As Windows XP comes to the end of its natural life on 8 April 2014, thousands of applications that run under the old operating system will need to be upgraded for a move to Windows 7 or 8.
The majority of commercial applications are available in newer versions of the Windows operating system, and users simply need to stay current. However, some businesses may find that they are unable to do so for various reasons, perhaps because they have been developed in-house, or are one of the few commercial applications that don’t have an upgrade path to Windows 7 or 8.
In these instances, what are the options when it comes to applications that can’t be upgraded? How can businesses overcome issues associated with legacy apps when upgrading the rest of their applications to Windows 8?
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Mircosoft Teams session or in person at TU/e located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid -Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
What is an RPA CoE? Session 1 – CoE VisionDianaGray10
In the first session, we will review the organization's vision and how this has an impact on the COE Structure.
Topics covered:
• The role of a steering committee
• How do the organization’s priorities determine CoE Structure?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
AppSec PNW: Android and iOS Application Security with MobSFAjin Abraham
Mobile Security Framework - MobSF is a free and open source automated mobile application security testing environment designed to help security engineers, researchers, developers, and penetration testers to identify security vulnerabilities, malicious behaviours and privacy concerns in mobile applications using static and dynamic analysis. It supports all the popular mobile application binaries and source code formats built for Android and iOS devices. In addition to automated security assessment, it also offers an interactive testing environment to build and execute scenario based test/fuzz cases against the application.
This talk covers:
Using MobSF for static analysis of mobile applications.
Interactive dynamic security assessment of Android and iOS applications.
Solving Mobile app CTF challenges.
Reverse engineering and runtime analysis of Mobile malware.
How to shift left and integrate MobSF/mobsfscan SAST and DAST in your build pipeline.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Digital Banking in the Cloud: How Citizens Bank Unlocked Their MainframePrecisely
Inconsistent user experience and siloed data, high costs, and changing customer expectations – Citizens Bank was experiencing these challenges while it was attempting to deliver a superior digital banking experience for its clients. Its core banking applications run on the mainframe and Citizens was using legacy utilities to get the critical mainframe data to feed customer-facing channels, like call centers, web, and mobile. Ultimately, this led to higher operating costs (MIPS), delayed response times, and longer time to market.
Ever-changing customer expectations demand more modern digital experiences, and the bank needed to find a solution that could provide real-time data to its customer channels with low latency and operating costs. Join this session to learn how Citizens is leveraging Precisely to replicate mainframe data to its customer channels and deliver on their “modern digital bank” experiences.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/temporal-event-neural-networks-a-more-efficient-alternative-to-the-transformer-a-presentation-from-brainchip/
Chris Jones, Director of Product Management at BrainChip , presents the “Temporal Event Neural Networks: A More Efficient Alternative to the Transformer” tutorial at the May 2024 Embedded Vision Summit.
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent a novel and highly efficient state-space network. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers.
Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
Connector Corner: Seamlessly power UiPath Apps, GenAI with prebuilt connectorsDianaGray10
Join us to learn how UiPath Apps can directly and easily interact with prebuilt connectors via Integration Service--including Salesforce, ServiceNow, Open GenAI, and more.
The best part is you can achieve this without building a custom workflow! Say goodbye to the hassle of using separate automations to call APIs. By seamlessly integrating within App Studio, you can now easily streamline your workflow, while gaining direct access to our Connector Catalog of popular applications.
We’ll discuss and demo the benefits of UiPath Apps and connectors including:
Creating a compelling user experience for any software, without the limitations of APIs.
Accelerating the app creation process, saving time and effort
Enjoying high-performance CRUD (create, read, update, delete) operations, for
seamless data management.
Speakers:
Russell Alfeche, Technology Leader, RPA at qBotic and UiPath MVP
Charlie Greenberg, host
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an...Jason Yip
The typical problem in product engineering is not bad strategy, so much as “no strategy”. This leads to confusion, lack of motivation, and incoherent action. The next time you look for a strategy and find an empty space, instead of waiting for it to be filled, I will show you how to fill it in yourself. If you’re wrong, it forces a correction. If you’re right, it helps create focus. I’ll share how I’ve approached this in the past, both what works and lessons for what didn’t work so well.
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
Freshworks Rethinks NoSQL for Rapid Scaling & Cost-EfficiencyScyllaDB
Freshworks creates AI-boosted business software that helps employees work more efficiently and effectively. Managing data across multiple RDBMS and NoSQL databases was already a challenge at their current scale. To prepare for 10X growth, they knew it was time to rethink their database strategy. Learn how they architected a solution that would simplify scaling while keeping costs under control.
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
Northern Engraving | Nameplate Manufacturing Process - 2024Northern Engraving
Manufacturing custom quality metal nameplates and badges involves several standard operations. Processes include sheet prep, lithography, screening, coating, punch press and inspection. All decoration is completed in the flat sheet with adhesive and tooling operations following. The possibilities for creating unique durable nameplates are endless. How will you create your brand identity? We can help!
White Paper | Closed Loop Lifecycle Planning
Table of contents
Executive Overview
The Economy
Megatrends
Governance
Closed Loop Lifecycle Planning Methodology
Hardware
Software
Staging and Integration
Moves, adds and changes
Warranty and maintenance
Help desk
Asset management
Project management
Management tools
Sustainability
Disposition
Refresh cycles
Service delivery strategies
The Print Fleet
The Display Fleet
The New IT Organization
The Global Aspect
Technology Refresh Cycle –
Unique for 2013
Executive Overview
Let’s face it: a technology refresh cycle is not exactly the most exciting topic that IT prefers
to discuss.
We in IT have been addressing the refresh of desktops, notebook PCs and servers for some
time. Perhaps, however, that is the point. If this upcoming technology refresh cycle looks just
like your last one, then maybe it is time to revisit what we in IT need to consider for this
specific cycle.
Many of the existing white papers on technology refresh do not address the megatrends and
their impact on the 2013 technology refresh cycle.
The objective of this White Paper is to provide a reasonably detailed discussion of technology
refresh for client computing. Unlike previous technology refresh cycles, this particular cycle is
occurring in the context of other significant trends in client computing.
The Economy
Regardless of your point of view, it is indisputable that this is a tight economy. The Great
Recession will have a long-lasting, lingering impact on the economy.
As a result, many businesses delayed certain investments in IT. These deferrals included:
• Postponing Windows 7 migrations until the last possible moment (perhaps hoping the April 2014 end-of-support date would slip)
• Not investing in software maintenance
• Putting off IT staffing decisions
• Extending the useful life of desktops and notebook PCs
• Increasing reliance on contingent and non-FTE labor
In essence, we in IT were “hunkering down” to weather the economic storm, in many cases out
of necessity.
During this same time period, however, the IT marketplace did not remain status quo. The
consumer mobility market literally exploded onto the scene, and consumer buying habits have
been fundamentally changed.
The end-user communities that IT supports now have different expectations than they did
before.
Megatrends
Most subject matter experts, third-party consultants, and IT industry watchers concur that
there are a number of game-changing trends, referred to as megatrends, that will reshape IT in
the next 3 to 5 years, if not sooner.
The list can go on, but the point is that warranty is straightforward, while maintenance is not.
Think about maintenance another way: does the IT department really want supporting older
technology to be a core competency?
This is another area where the megatrends might come into play. Technology refresh cycles may
well become shorter than today’s. Moreover, cloud and virtualization might change the
approach. Whether maintenance remains a key element of the economics may be revisited in 2013.
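The maintenance-versus-refresh trade-off above can be made concrete with a small break-even sketch. All dollar figures below are illustrative assumptions, not data from this paper.

```python
# Hypothetical break-even sketch: compare cumulative spend on an aging,
# out-of-warranty unit against refreshing it. All figures are assumptions.

def refresh_break_even(refresh_cost, new_annual_support, old_annual_support,
                       horizon_years=5):
    """Return the first year (1-based) in which cumulative spend on a
    refreshed unit drops below cumulative spend on the aging unit,
    or None if it never does within the horizon."""
    for year in range(1, horizon_years + 1):
        keep_old = old_annual_support * year
        refresh = refresh_cost + new_annual_support * year
        if refresh < keep_old:
            return year
    return None

# Example: $900 to refresh, $150/yr support on new gear versus $450/yr
# maintenance on aging gear.
print(refresh_break_even(900, 150, 450))  # → 4
```

The point of the sketch is the comparison itself: once the gap between old-gear maintenance and new-gear support is monitored continuously, the break-even year falls out of simple arithmetic rather than a fixed 3-to-5-year calendar.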
Help Desk
The help desk is changing, or is about to. End users in many cases now believe (and in certain
cases are likely not far off the mark) that, given their competency with the consumerization of
IT, deferring a call to a level 1 agent may not be optimal.
This technology refresh cycle may be an optimum time to introduce advanced levels of self-help, for mobility users in particular.
Chat, FAQs, and other self-help technologies are mainstream. As consumers, end users are not
only familiar with these technologies, but in many cases prefer to use them.
The adoption of self-help has already crossed over in the consumer marketplace. Think
about airport kiosks, ATMs for banking, and the consumer adoption of mobile applications
for day-to-day activities. To a degree, end users expect corporate IT to take the same direction,
and may actually be confused as to why it is not happening more quickly.
From a cost perspective, the potential savings could be dramatic. Depending upon content and
complexity, a help desk call can cost in the range of $12-$25 per call.1 Self-service can
be measured in single dollars per call. Of course, there would be a one-time
infrastructure cost to set up the practice levels.
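As an illustration only, the gap between agent-handled and self-service costs can be modeled with a short break-even sketch. The $12-$25 per-call figure comes from the cited source; the self-service cost, call volume, deflection rate, and one-time setup cost below are hypothetical assumptions:

```python
# Hypothetical break-even sketch for self-help vs. agent-handled calls.
# All figures below are illustrative assumptions, not measured values.

AGENT_COST_PER_CALL = 18.50   # midpoint of the $12-$25 range cited above
SELF_SERVICE_COST = 2.00      # assumed "single dollars" per self-service contact
SETUP_COST = 50_000.00        # assumed one-time infrastructure cost
CALLS_PER_YEAR = 12_000       # assumed annual call volume
DEFLECTION_RATE = 0.30        # assumed share of calls moved to self-service

deflected = CALLS_PER_YEAR * DEFLECTION_RATE
annual_savings = deflected * (AGENT_COST_PER_CALL - SELF_SERVICE_COST)
payback_years = SETUP_COST / annual_savings

print(f"Calls deflected per year: {deflected:,.0f}")
print(f"Annual savings:          ${annual_savings:,.2f}")
print(f"Payback period:          {payback_years:.2f} years")  # under 1 year here
```

Under these assumed inputs the one-time infrastructure cost pays back in well under a year; the real decision, of course, depends on actual call volumes and deflection rates.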
This further points out the significance of this technology refresh cycle: if the infrastructure is
to be established to provide self-help as an alternative service level, now is the time, since the
consumerization, BYO, and virtualization trends suggest the requirement for such a service
level in the near future.
Asset Management
Asset management for hardware and software has always been the most critical building block
for client lifecycle management. All of the other lifecycle elements have a key dependency on
asset management in order to optimize that lifecycle operation.
That has been one of the issues over time with asset management: the benefits may not seem
obvious to non-IT/lifecycle-oriented teammates. As a result, like training budgets, it has
always been a challenge to secure appropriate levels of funding for asset management.
With all of the various initiatives woven into this technology refresh cycle, this is a cycle in which
the practices for hardware and software asset management should be strengthened.
This premise is for a variety of reasons:
1. Security requires it: you cannot protect data that you cannot locate
2. In a world of device diversity, you cannot manage information that cannot be tracked
3. Refresh requires specific identification
4. Software needs to be identified and rationalized
5. Megatrends must have a foundation to be built upon
1. Alinean and HP TCO ROI Tool, and InfoTech white paper and report “Steer Clear of Steep Help Desk Costs”
Project Management
This technology refresh cycle and process should not be viewed as a standalone project. This
refresh is highly interrelated with almost every other IT and business project in the enterprise.
For years, the refresh process stood alone and focused merely on replacing old devices
with new ones. That is simply not the case today. Today's conversation needs to be more user
centric: what does the end user require to do their job?
Like-for-like replacement misses the point that the way technology is used has changed since
the current installed base was deployed.
Management Tools
It is highly likely that the management tools that were in place when the desktops and notebook
PCs were deployed 4 to 5 years ago may (or may not) be adequate for the new requirements. The
requirements have changed.
MDM (mobile device management) was not as pervasive at the last refresh as it is in this cycle,
nor were home offices as common.
Management tools require more focus on security, including spyware, malware, and anti-virus.
Asset management, patch management, and security management are now somewhat
bundled as suites to address current threat and regulatory requirements.
Sustainability
Sustainability is a strategy depicted on virtually every business's website. Every
business defines its commitment to the environment.
This technology refresh is a sustainable refresh cycle. Older desktops and notebook
PCs that are 4 years old or more simply consume much more power than their current
counterparts. This is not to imply that there is anything out of the ordinary with the products;
simply consider the U.S. Government EnergyStar.gov standards as an example.
Just comparing a 4-year-old desktop to a current desktop could yield as much as $30 per year in
power savings, without any power management considerations. If the new desktop is
retained for a 3-year useful life, the savings are 3 years x $30, or $90 over the 3
years. If the desktop acquisition price is $600, the sustainability savings approach 15% of the
acquisition price.
The above commentary assumes that there is no power management strategy in place. This
is where sustainability and asset management relate to each other. For managing the fleet of
desktops and notebook PCs, Windows 7 has considerable power management settings that can
effectively reduce power consumption. Combined with the OEM power management schemes
(such as HP Power Management), consumption can be reduced further.
There are also third party software manufacturers that deliver power management solutions that
complement both the OEM and Windows 7 capabilities.
With all of this said, there is another $20 to $25 per device per year of savings potential from
power management.
Combining the impact of newer, more sustainable technology, Windows 7, OEM power
management, and potential third-party tools, there is an opportunity to reduce power costs by
$50 to $55 annually per device, or $150 to $165 over a three-year useful life. This represents
almost one third of the acquisition cost.
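The arithmetic above can be sanity-checked with a short script. The $30 hardware savings, the $20-$25 power management savings range, the $600 acquisition price, and the 3-year useful life all come from the text; the script simply combines them:

```python
# Reproduces the sustainability savings arithmetic from the text.
ACQUISITION_PRICE = 600.00          # desktop acquisition price (from text)
HARDWARE_SAVINGS_PER_YEAR = 30.00   # new desktop vs. 4-year-old desktop (from text)
POWER_MGMT_SAVINGS_PER_YEAR = (20.00, 25.00)  # Windows 7 / OEM / third-party range
USEFUL_LIFE_YEARS = 3

low = (HARDWARE_SAVINGS_PER_YEAR + POWER_MGMT_SAVINGS_PER_YEAR[0]) * USEFUL_LIFE_YEARS
high = (HARDWARE_SAVINGS_PER_YEAR + POWER_MGMT_SAVINGS_PER_YEAR[1]) * USEFUL_LIFE_YEARS

print(f"Lifetime savings: ${low:.0f} to ${high:.0f}")   # $150 to $165
print(f"Share of acquisition price: {low / ACQUISITION_PRICE:.0%} "
      f"to {high / ACQUISITION_PRICE:.0%}")
```

At the high end, $165 against a $600 price is 27.5%, which is where the "almost one third" figure comes from.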
The Global Aspect
Client lifecycle management is greatly facilitated where there is density, centralized locations,
and a single geography. Unfortunately, for most businesses that simply is not the case.
Every country, city, and state often has its own unique set of rules
and regulations regarding client devices. In some cases these requirements revolve around
who can perform the work on the various lifecycle elements. Whether required by regulation or by
local practice, these requirements must be taken seriously.
When looking at configuration, certain degrees of security and countermeasures may also be
required, or restricted.
Whether the solution is WiFi, VPN, or other networking bandwidth, it may not be
reasonable to assume that the networking infrastructure has adequate capacity to support the
installed base. Such considerations are critical for virtual and cloud solutions. If the data center
is at or near capacity, that could also be a limiting factor for alternatives.
The megatrend of emerging markets means it cannot be assumed that an IT infrastructure,
such as networking capability, is already in place.
Global lifecycle management is not a “one size fits all” proposition. While perhaps not fully
unique to each country and locale, currencies, taxes, labor rules, and other considerations
make comparison of TCO much more complicated.
While there will be consistencies in governance, the policies, processes, and procedures to
achieve the governance will likely vary.
As globalization continues, the role of IT in establishing lifecycle parameters will also change.
In the preceding section, we discussed the new emerging IT organization. This organization
is born not only of converging technology, but also of the management of diverse devices
across a diverse geographic setting.
In terms of global technology refresh, the expertise required varies from vertical segment to
vertical segment. Establishing a cross-functional project team is highly desirable.