Traditional outsourcing lacks the flexibility and agility today’s organizations require to obtain data-driven business intelligence that provides a clear, comprehensive view of their operations. Cloudsourcing is a newer, more nimble model that has emerged to fill the gap. This whitepaper, authored by industry-leading experts at OneSource Virtual, explores opportunities available through cloudsourcing and Software as a Service (SaaS).
Agile methodology explained simply for those unfamiliar with it: a step-by-step guide for moving organisations from waterfall to agile ways of working. Agile project delivery is simplified, stakeholders will clearly understand their role in implementation, and common questions ("what is a Scrum Master?", etc.) are answered.
A creative solution for recruiting, vetting and training job candidates. Our system is a cutting-edge, highly creative method for recruiting the best candidates and then training them on an exact digital clone of your work environment. We submit a file for each candidate containing their resume, background check report, and video files of their initial interview, a computer software skills assessment, and clips of their ongoing training.
Our educators train prospects one-on-one in our lab environment, and we also provide candidates with VPN access for off-hours training from their homes. SAS fully prepares your new staff, presenting them only when they have completely mastered their job functions.
Throughout the process they are our employees, not yours. Prospects weeded out early on never hit your HR books. We assume the responsibility and costs so our clients reap the benefits.
Millennium Pharmacy Takes SaaS Model to New Heights Via Policy-Driven Operati... by Dana Gardner
Transcript of a BriefingsDirect podcast on how a major healthcare provider has used advanced IT management and operational efficiency processes and systems to keep applications up to date, compliant, performant, and protected.
How HTC Centralizes Storage Management to Gain Visibility, Reduce Costs and I... by Dana Gardner
Transcript of a BriefingsDirect podcast on why bringing a common management view into play improves problem resolution and more fully automates resource allocation.
Intralinks Uses Hybrid Computing to Blaze a Compliance Trail Across the Regul... by Dana Gardner
Transcript of a sponsored discussion on how regulations around data sovereignty are forcing enterprises to consider new approaches to data, intellectual property, and cloud collaboration services.
DBA on the Cloud – Is this the Present and the Future!, by Durga Prasad Tumu
This article discusses present and future scenarios for the DBA role combined with the cloud. More interesting technical articles like this can be found at http://blog.amzur.com
Cloud Approach to IT Service Desk and Incident Management Bring Analysis, Low... by Dana Gardner
Transcript of a BriefingsDirect podcast on how SaaS delivery and BMC Software’s “truth in data” architecture deliver big payoffs for IT support at Comverge and Design Within Reach.
ExperiaSphere: Open-Source Management and Orchestration -- Introduction, by tnolle
This presentation is an annotated set of tutorial slides on the ExperiaSphere(TM) model of open-source management and orchestration. It provides an overview of how to build on open-source tools like Linked USDL, OpenTOSCA, RTG, and OpenNMS to build the basis for management and orchestration for legacy networks, SDN, NFV, and the cloud.
DevOps and Security, a Match Made in Heaven, by Dana Gardner
Transcript of a BriefingsDirect discussion on the relationship between DevOps and security, exploring the impact of security on compliance, risk, and auditing.
How Software-Defined Storage Translates into Just-in-Time Data Center Scaling, by Dana Gardner
Transcript of a discussion on how scaling customized IT infrastructure for a hosting organization in a multitenant environment delivers major benefits.
JavaOne 2015: DevOps and the Darkside (CON6447), by Steve Poole
So you get DevOps. You like the idea and think it’s important. The trouble is that others in your team don’t. This session will help you understand how to convince your team of the benefits of DevOps. Packed with facts and figures, the presentation works through the common challenges Java teams face when moving to a DevOps model and outlines how to address them. It also shows you how to balance evangelism against pragmatism when championing DevOps in your organization. You’ll learn how others have made the transition to DevOps and understand what mistakes to avoid when doing so. Whether you need to know how to be a DevOps evangelist or simply want to understand why DevOps is important, this session is for you.
This slide deck briefly presents how organizations can leverage the cloud to virtualize functional/performance testing and reduce the cost of investing in hardware.
TierPoint white paper: How_to_Position_Cloud_ROI_2015, by sllongo3
Traditional ROI calculators do an ineffective job of measuring the value of cloud services. This white paper serves as a guide to calculating cloud ROI using seven metrics you may not have considered.
To create a successful business case for cloud storage, there are certain things you need to look at, including the benefits and risks. Here is a detailed explanation of how you can design an efficient cloud storage business case for your organization.
One for the first steps towards a successful cloud migration is obta.pdf, by ezzi552
If you buy a new car, the entire purchase is counted as consumption in the year in which you make the transaction. Explain briefly why this is in one sense an "error" in national income accounting. (Hint: How is the purchase of a car different from the purchase of a pizza?) How might you correct this error? How is housing treated in the National Income and Product Accounts? Specifically, how does owner-occupied housing enter into the accounts? (Hint: Do some Web searching on "imputed rent on owner-occupied housing.")
Solution
This is in one sense an "error" because purchasing a car may also be considered investment: unlike a pizza, which is consumed immediately, a car produces transportation services over many years (particularly when it is also used commercially). We could correct the error by listing the purchase as investment. Owner-occupied housing is treated in the National Income and Product Accounts by imputing rents to it, as if the owners rented the houses to themselves.
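A small numeric sketch of the distinction above, with made-up figures: counting the whole car price as consumption in year one versus spreading the car's services over its life as if it were an investment.

```python
# Sketch: consumption vs. investment treatment of a car purchase.
# All dollar figures and the 10-year service life are hypothetical.

def consumption_treatment(price):
    """Whole price counted as consumption in the year of purchase."""
    return [price]

def investment_treatment(price, service_life):
    """Price counted as investment; the car's services are consumed
    gradually, approximated here as straight-line over its life."""
    return [price / service_life] * service_life

car_price = 30_000
flows = investment_treatment(car_price, service_life=10)

print(consumption_treatment(car_price))  # [30000] -- all in year one
print(flows[0])                          # 3000.0 consumed per year
print(sum(flows))                        # 30000.0 -- totals still match
```

Either way the totals agree; only the timing of measured consumption differs, which is exactly why the one-year treatment is an "error" in timing rather than in amount.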
Don't Add Risk And Double Investment Requirements By Estimating Project Budge... by Ed Kozak
Many organizations need to set aside funding to conduct their own projects. How much funding, though? Poor project estimating leads to poor business forecasting and that can have a big impact on your organization's success. Learn the 4 methods typically used to estimate projects and the error (and risks) associated with each.
Infrastructure-As-Code means that infrastructure should be treated as code – a really powerful concept. Server configuration, packages installed, relationships with other servers, etc. should be modeled with code to be automated and have a predictable outcome, removing manual steps prone to errors. That doesn’t sound bad, does it?
The goal is to automate all the infrastructure tasks programmatically. In an ideal world you should be able to start new servers, configure them, and, more importantly, be able to repeat it over and over again, in a reproducible way, automatically, by using tools and APIs.
Have you ever had to upgrade a server without knowing whether the upgrade was going to succeed or not for your application? Are the security updates going to affect your application? There are so many system factors that can indirectly cause a failure in your application, such as different kernel versions, distributions, or packages.
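The Infrastructure-as-Code idea above can be sketched in a few lines: the desired state of a server is plain data, and an "apply" step converges the current state toward it, so running it twice does no extra work. This is a minimal illustrative model, not a real provisioning tool; all names here are invented.

```python
# Minimal sketch of infrastructure-as-code: desired state is data, and
# apply() converges toward it. A second apply changes nothing (idempotent).

DESIRED = {
    "packages": {"nginx", "openssl"},
    "services": {"nginx": "running"},
}

def apply(state, desired):
    """Return (new_state, actions) needed to reach the desired state."""
    actions = []
    packages = set(state.get("packages", set()))
    for pkg in sorted(desired["packages"] - packages):
        actions.append(f"install {pkg}")   # only missing packages
        packages.add(pkg)
    services = dict(state.get("services", {}))
    for svc, status in desired["services"].items():
        if services.get(svc) != status:
            actions.append(f"set {svc} -> {status}")
            services[svc] = status
    return {"packages": packages, "services": services}, actions

state, actions = apply({}, DESIRED)    # first run does the work
state, repeat  = apply(state, DESIRED) # second run finds nothing to do
print(actions)  # ['install nginx', 'install openssl', 'set nginx -> running']
print(repeat)   # []
```

Real tools in this space work the same way at heart: you describe the end state once, and the tool computes and applies only the delta, which is what makes upgrades and rebuilds predictable and repeatable.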
When allegations against the Foxconn manufacturing plant—where Apple, Samsung, and Microsoft make large portions of their electronics—were first leveled in 2012, American consumers sure did seem angry. We were irate that their 1 million workers were grossly underpaid (or sometimes not paid at all), that 14-year-olds were making iPhones and Xboxes, and that the factory actually responded to people defenestrating themselves—seeking death rather than more work—by installing safety nets. As is so often the case, anger is all we could muster.
First and foremost, some perspective: although I feel this would benefit all IT architects, my examples are based on personal experience from the better part of 16.5 years, with 1.5 years consulting and 15 years as a full-time employee for GMAC FS / branded Ally FS / now Ocwen Financial, and Dell/Perot. You're more than welcome to review my biography/profile at www.linkedin.com/in/virtualos/ to judge whether my opinion is worth the time. When I left GMAC FS, I was managing three teams while continuing to wear the hats of Enterprise Architect and Citrix Architect.
Storage giants NetApp (NASDAQ:NTAP) and EMC (NYSE:EMC) are making efforts to shift away from the tag of pure “storage companies”. In its Q2 earnings call last year, NetApp management pointed out that they are more of a “data management” vendor and not just a storage hardware company. [1] This statement signaled the company’s intention to evolve from being predominantly a hardware manufacturer to an end-to-end storage solution provider. A similar trend was highlighted when NetApp announced its agreement with VMware (NYSE:VMW) to integrate its Clustered ONTAP Drive with VMware’s vCloud suite last year. [2]
If we don’t balance the human values that we care about with the compelling uses of big data, our society risks abandoning them for the sake of mere innovation or expediency.
These days, everyone seems to be talking about “big data.” Engineers, researchers, lawyers, executives and self-trackers all tout the surprising insights they can get from applying math to large data sets. The rhetoric of big data is often overblown, exaggerated and contradictory, but there’s an element of truth to the claim that data science is helping us to know more about our world, our society and ourselves.
Data scientists use big data to deliver personalized ads to Internet users, to make better spell checkers and search engines, to predict weather patterns, perform medical research, learn about customers, set prices and plan traffic flow patterns. Big data can also fight crime, whether through the use of automated license-plate readers or, at least theoretically, through the collection of vast amounts of “metadata” about our communications and associations by the National Security Agency.
In a world where the Internet of Things (IoT) produces massive amounts of data from mobile devices, vehicular systems and environmental sensors, data scientists will be tasked with deciding what to do with all of this information. We sat down with Cristian Borcea, PhD from the New Jersey Institute of Technology to discuss the IoT and Big Data applications.
insideBIGDATA: It seems we can’t turn on the television or read a newspaper without encountering the phrase, the “Internet of Things”. Aside from being a marketing term, what does this really mean?
Netgain is a healthcare IT provider offering secure, reliable and affordable solutions for complex IT deployments, hosted electronic medical records (EMR) and practice management systems. The company provides solutions to hundreds of healthcare organizations across the United States, often within terminal server environments.
In response to the rising importance of eHealth initiatives, the company has developed a unique eHealth Architecture, which offers a fully integrated approach to healthcare IT and open-platform solutions that suit the specific needs of healthcare organizations. Netgain's IT outsourcing and 24-hour helpdesk eliminate the need for expensive in-house hardware and support, enabling healthcare organizations to focus on caring for patients instead of worrying about IT concerns.
Printing is a basic business requirement. Yet many of us do not have the capability to print from anywhere, using any device, as the situation demands. Many IT departments are still struggling with the complexities of printer driver management.
Meanwhile, virtualization and cloud computing, mobile work styles and BYOD trends add more challenges to enterprise printing, and bring security and compliance issues to the top of the agenda for C-level executives. Print security is much more than protecting gigabytes in a sealed room. Consider the risks of security breaches and potential damages when a sensitive document is printed and left unattended on a shared printer in a public environment. Forward-thinking organizations must future-proof their printing environments to provide worry-free printing with security and compliance at reduced costs.
Generative AI Deep Dive: Advancing from Proof of Concept to Production, by Aggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
The Art of the Pitch: WordPress Relationships and Sales, by Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Securing your Kubernetes cluster: a step-by-step guide to success! by KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdf, by Peter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs, by Alex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
UiPath Test Automation using UiPath Test Suite series, part 4, by DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
A tale of scale & speed: How the US Navy is enabling software delivery from l... by sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
The new frontiers of AI in RPA with UiPath Autopilot™, by UiPathCommunity
In this free online event, organized by the Italian UiPath Community, you can explore the new features of Autopilot, the tool that integrates Artificial Intelligence into the development and use of automations.
📕 Together we will look at some examples of Autopilot in use across several tools of the UiPath Suite:
Autopilot for Studio Web
Autopilot for Studio
Autopilot for Apps
Clipboard AI
GenAI applied to Document Understanding
👨🏫👨💻 Speakers:
Stefano Negro, UiPath MVPx3, RPA Tech Lead @ BSP Consultant
Flavio Martinelli, UiPath MVP 2023, Technical Account Manager @UiPath
Andrei Tasca, RPA Solutions Team Lead @NTT Data
Welcome to the first live UiPath Community Day Dubai! Join us for this unique occasion to meet our local and global UiPath Community and leaders. You will get a full view of the MEA region's automation landscape and the AI Powered automation technology capabilities of UiPath. Also, hosted by our local partners Marc Ellis, you will enjoy a half-day packed with industry insights and automation peers networking.
📕 Curious on our agenda? Wait no more!
10:00 Welcome note - UiPath Community in Dubai
Lovely Sinha, UiPath Community Chapter Leader, UiPath MVPx3, Hyper-automation Consultant, First Abu Dhabi Bank
10:20 A UiPath cross-region MEA overview
Ashraf El Zarka, VP and Managing Director MEA, UiPath
10:35: Customer Success Journey
Deepthi Deepak, Head of Intelligent Automation CoE, First Abu Dhabi Bank
11:15 The UiPath approach to GenAI with our three principles: improve accuracy, supercharge productivity, and automate more
Boris Krumrey, Global VP, Automation Innovation, UiPath
12:15 Discover how Marc Ellis leverages tech-driven solutions in recruitment and managed services.
Brendan Lingam, Director of Sales and Business Development, Marc Ellis
Accelerate your Kubernetes clusters with Varnish Caching, by Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
GraphRAG is All You Need? LLM & Knowledge Graph, by Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Pushing the limits of ePRTC: 100ns holdover for 100 days, by Adtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
GuideIT - Virtual Economies of Scale
VOS Cloud - Economies of Scale
By: Brian Murphy || http://www.linkedin.com/in/virtualos || www.guideit.com
VOS Cloud - Maximize Economies of Scale
VOIS = Virtual Operating System Infrastructure versus Virtual Desktop
VOS = Virtual Operating System
ROL = Resources on Loan (Dynamic Capacity)
Imagine a larger company with 8,000 servers (50 percent virtual / 50 percent physical) that has just finished "optimizing" its environment by using server virtualization, in the conventional sense, to remove 2,000 servers from the environment. This is no doubt a significant cost save on hardware, operating systems and application agent software. Depending on the virtualization platform used, there is a varying degree of expense that offsets the initial cost save, but you get some of that back with the operational efficiency inherent in virtualization. That is just the way it works – you have to spend money to save money.
Consider next: although I reduced my server count by 2,000 – by moving a few applications around, archiving applications, or simply purchasing newer, more powerful hardware that allows shutting down 4 physical servers and replacing them with 2 virtual servers – regardless, I decommissioned 2,000 servers. Looking at the data center as a meter, is that meter running at 1% utilization (or 5% for that matter), or 70% utilization?
Which is better, in your opinion? 8,000 servers running at 25% utilization on average, or 2,000 servers running at 70% utilization, synchronized with a 500-server agnostic dynamic resource pool providing ROL (resources-on-loan) as needed, then reclaiming those resources after job 1 is done, ready and waiting for job 2? The tweaking process consists of making sure the resource pool is using the least amount of resources possible, so that your ROL do not sit idle for more than 10-20 seconds. Before you answer, draw conclusions, or consider all the variables – please continue reading. At this point, we have only scratched the surface.
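A back-of-envelope comparison of the two scenarios just posed can make the trade-off concrete. Each server is normalized to one "unit" of capacity here; the server counts and utilization percentages come from the text, while the unit normalization is an assumption for illustration only.

```python
# Back-of-envelope capacity comparison: 8,000 lightly used servers vs.
# 2,000 busy servers plus a 500-server dynamic resource pool (ROL).
# Each server is normalized to 1 unit of capacity (an assumption).

def work_delivered(servers, utilization):
    """Useful work, in server-units, at a given average utilization."""
    return servers * utilization

scenario_a = work_delivered(8000, 0.25)   # sprawling, idle estate
scenario_b = work_delivered(2000, 0.70)   # consolidated baseline
pool_burst = work_delivered(500, 1.0)     # ROL capacity at full burst

print(scenario_a)                # 2000.0 units of work
print(scenario_b + pool_burst)   # 1900.0 units at full burst
```

Under these assumptions, 2,500 well-run servers deliver nearly the same useful work as 8,000 underutilized ones, which is the whole argument for measuring the data center by utilization rather than by server count.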
50 Percent Reduction in Server Count
Now, I ask you to consider a scenario where 2 weeks later I/you/we present a plan that gives the customer (or your current employer!) a blueprint for removing 5,500 servers from the environment, after just removing 2,000 servers.
However, there is a catch – something I have not discussed as of yet. Yet, for a moment, let us assume there is no catch, take the plan at face value, and suppose your manager believes it possible. How much support can we expect from the multiple vendors after they learn we are changing the way we host servers from static to dynamic, and that our new measurement is not the number of servers but instead percentage utilization of all resources in the data center? If I represented the vendor, I am not sure that I could receive this news and consider it positive. If it were me, I would begin consulting my senior management as to next steps. More than likely, next steps would include a re-negotiated contract with higher rates – for starters. Can you blame them?
Consider the following: for this to work requires that we reduce the server count significantly, and every dollar saved is a dollar lost by multiple vendors. Not that I can see the future, but I do foresee it causing some "strained" relationships. Consider the vendors impacted by the removal of 5,500 servers – that is a lot of operating systems, inventory agents, antivirus agents, patching agents, SOX agents, HIDS agents, and the list goes on depending on your environment. So the cost save is not just limited to 5,500 servers.
Hopefully you can see where this is going: not only did I eliminate 5,500 servers or virtual nodes from the environment, but also 5,500 operating systems, patch agents, inventory agents, et cetera. What about the server-to-administrator ratio? Data center power requirements? Data center foot-print / space requirements – where I was running out of space and needing to expand, now I can expand and have room to grow.
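The multiplier effect described above is easy to put in rough numbers. Every per-server dollar figure below is hypothetical, chosen purely to show how per-server costs compound when 5,500 servers are decommissioned.

```python
# Each decommissioned server also removes an OS license, several agents,
# and power/cooling costs. All annual cost figures are hypothetical.

ANNUAL_COST_PER_SERVER = {
    "os_license":      800,
    "patch_agent":     100,
    "inventory_agent":  75,
    "antivirus_agent":  90,
    "power_cooling":   600,
}

def annual_savings(servers_removed, cost_items):
    """Total yearly savings: per-server stack cost times servers removed."""
    return servers_removed * sum(cost_items.values())

per_server = sum(ANNUAL_COST_PER_SERVER.values())
print(per_server)                                    # 1665 per server/year
print(annual_savings(5500, ANNUAL_COST_PER_SERVER))  # 9157500 per year
```

Even with modest per-server figures, the stack of licenses and agents pushes the savings well into seven figures annually, which is also why, as the next sections argue, the vendors collecting those per-server fees will not be cheering.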
Before we discuss the technical details of how we accomplish this goal, I need you to consider that it begins at server optimization, moves to desktop optimization, then to process optimization. In other words, we have created the foundation, but the road map for optimization and leveraging virtualization to maximize economies of scale is an infant technology.
The Dynamic Data Center does not stop at server optimization. Actually, desktop virtualization is closer to achieving this concept than server virtualization. But before we cross this bridge we need to examine the key differences from a purely IOPS perspective. What is the major difference between server virtualization and desktop virtualization? Exactly – reads versus writes; virtual servers just sit there for the most part and read a few things when necessary, but that is not the case with virtual desktops. Just to be clear, I'm referring to read-only, static desktops where you limit the desktops to a few core Golden Images (less = more money saved for our company/customer). The "hypervisor" is not our topic of discussion at the moment, but it is a key consideration from a cost perspective.
All Talk no Action
This is where things start to get really interesting, because of brand loyalty – loyalty at both the customer and vendor level. As I hinted earlier with the strained vendor relationships: that strain, and the loss of income behind it, does not bode well for anyone taking advantage of the vendor-sponsored trips, training, and gifts. You cannot implement this level of change and expect much support from any vendor that stands to lose money, in the sense that reducing physical or virtual assets by 50 percent results in dramatic savings for the company that utilizes this strategy.
For every $1,000,000.00 we save the company, multiple vendors lose money. Our gain is their pain, and you can be assured that once they find out they will do anything necessary to defend their territory, and they will start by aligning with the internal team relationships and executive sponsors.
Same concept as "can't win for losing," because saving money requires that I set fire to a few bridges. On the other hand, my job is to save the company money. That is one of the areas in which I excel, and any company that utilizes this new methodology will save a lot of money at the vendors' expense, but that is not a reason to halt progress. Vendors will adjust, and more than likely they are next in line to implement, because the cost savings from the operational perspective and the return on investment are coming.
As I indicated earlier, a few companies have already started and are way ahead of the curve relative to customizing their shared resource pools to service thousands of servers with a few hundred dynamically allocated servers, spun up when needed, then shut down and reallocated to the next job. Dynamic Data Center: so where do we go from here?
It was a reminder of what I preach but have failed to accomplish: all talk, no action. If given the opportunity, this is one of the personal visions I would like to pursue. Anyone can write about the future of the data center and dynamic capacity on demand. The fact is, it is easy to justify in your own mind, but it requires more than a concept on paper to convince senior leadership, who must then convince the CIO, who in turn must convince the CFO. Not to mention all the paperwork requirements: Cost Benefit Analysis, ROI Analysis, OPEX Analysis, CAPEX Analysis for years 1 to 10. That assumes you can take everything in this blog and consolidate it down to 3 slides!
"Contribute" defined as mutually beneficial for me and my company. I've learned to suppress my "vision" so that I can adopt the company vision and the team vision. I have not given up on my vision per se, but have placed it on hold so that I can focus on doing my job rather than pursuing what I feel is the next evolution of the data center and how we use [or better stated, don't use] virtualization technology, our perception of the term "private cloud," and the impact to all infrastructure support teams.
The more I thought about that original question (reference below), the more difficult my response became, because for me to do my job requires a level of change that requires executive sponsorship and, despite the significant cost savings (OPEX and CAPEX), requires the full cooperation and compromise of all IT infrastructure teams impacted by the new data center design.
I think that most people desire the opportunity to utilize what they consider to be all of their core strengths to create a positive and lasting change, and if given that opportunity would at least consider the option, depending on the risk/reward. For me, the reward is substantially greater than the risk, assuming all criteria are met. That assumes, however, that all the requirements are clearly stated with sign-off and full commitment from executive leadership.
The concept is not new by any means; the "secret sauce" is in the design and implementation. The implementation is the hardest part of the equation, because it requires not only a change to the infrastructure but, more importantly, that all the infrastructure teams change how they interoperate with each other and with the customer.
Regarding the "design": as with anything, there is a "right way" and a "wrong way," and the "wrong way" typically occurs by hiring a resource or multiple resources that are not prepared to implement this level of change. On the other hand, it does not require hiring multiple expensive third-party consulting firms to the point of negating a large percentage of the OPEX and CAPEX savings for the first few years.
It starts with one person and one "role," and the expectation that all the infrastructure teams are engaged day one. The core infrastructure teams are one of several factors that determine the success of the change before, during, and afterwards. This person is responsible for the design, but that is only the beginning.
Although the technology has improved somewhat (printing, application streaming, policy management, centralized access and control), and the introduction of server virtualization on a mass scale created a methodology for reducing server footprint, it did not necessarily provide a substantial cost savings from an OPEX and ROI perspective. CAPEX, sure; OPEX and ROI depend on whether you implemented the "wrong way" or the "right way."
Assuming it was implemented the "right way," your cost savings are limited to the number of virtualized servers/desktops hosted per physical node. In some cases, I've seen this ratio purposely reduced so as to limit the number of nodes lost in the event of hardware failure. In other assessments, I've seen a virtual server allocated for every new request from the business, to the point where there was half a server per employee (6000 users and 3000 virtual servers). Virtualization sprawl is still a problem today. Dynamic Data Center addresses that problem, and more: in a properly designed dynamic data center this number could be 500, depending on the applications and final shared pool allocations.
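As a rough illustration of why pooling shrinks that number, here is a sketch (with made-up utilization figures) that sizes a shared pool to carry the same aggregate load at a 70% target; the real answer depends on the applications, as noted above:

```python
import math

def pooled_server_count(server_count, avg_util, target_util=0.70):
    """How many pooled servers carry the same aggregate load at a target
    utilization (capacity normalized to one server = one unit)."""
    aggregate_load = server_count * avg_util
    return math.ceil(aggregate_load / target_util)

# Sprawl example from the text: 3000 virtual servers, assumed ~10% busy each.
print(pooled_server_count(3000, 0.10))  # 429
```

With the assumed 10% average, the arithmetic lands in the same neighborhood as the "could be 500" figure; at the 2-3% averages mentioned later, the pool shrinks even further.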
My ideal career goal is probably better defined as a "career vision": after years of implementing change utilizing virtualization technology, only to see the cost benefits negated because of the limited scope of use; something akin to having Windows 7 Ultimate installed with Office Professional Plus 2010, Project 2010, and Visio 2010 for a user that browses the internet and occasionally uses Microsoft Word.
This is a good example of how Citrix is utilized in a lot of companies today. However, this does not begin to scratch the surface when you look at economies of scale and take a single variable such as processor utilization for all systems, measured 24×7 over 28 days. If you exceed 5% on average, then you deserve a pat on the back and a hall pass. But what if I told you the goal is 70% utilization, because of the wasted hardware sitting in your rack, and that the virtual machines are the laziest of the bunch so I want those gone first: what do you do? Actually, once senior management realizes and acknowledges this as a problem is where we discuss the "solution" and present the "non-negotiable" requirements.
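For anyone who wants to run the same measurement thought experiment, here is a minimal sketch; the trace is a toy 28-day hourly sample set with invented numbers (a host that spikes briefly each day and idles otherwise):

```python
def avg_utilization(samples):
    """Mean of CPU-utilization samples (0.0-1.0) over a measurement window."""
    return sum(samples) / len(samples)

# Toy 28-day trace, hourly samples: 60% busy for 2 hours a day, 2% otherwise.
trace = ([0.60] * 2 + [0.02] * 22) * 28
print(f"average: {avg_utilization(trace):.1%}")  # average: 6.8%
```

Even with a daily spike, the 24×7 average barely clears the 5% mark, which is the gap the 70% goal is aimed at.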
Expense of Doing Business
IT will always be thought of as a necessary expense of doing business [because it is an expense], and we have a long way to progress relative to data center optimization. Virtualization technology was a step in the right direction, but for every "right" implementation there is an offsetting "wrong" implementation taking us two steps back; for example, virtual server sprawl has managed to negate the benefits. There are better ways, and I don't hold the patent; there is a small group of architects that know exactly what I'm referencing. The proposed concept is not new; it is the amount of change required to the architecture, and the impact to the architecture/engineering teams. To be frank, the battle could be won or lost here; however, as stated earlier, this is where I would play a big role.
Moving forward, I feel it is the next evolution for data centers: anyone searching for that next "cost savings idea" should look no further and continue reading, if you are ready to go beyond basic server, desktop, application, and service virtualization to a "shared infrastructure services" concept that requires consolidation of like services/applications, merged with virtualization and a minimalist shared resource pool that is autonomous.
Dynamic Data Center 3.x [I'm not claiming fame for the name, just using it as an example]. The definition [keeping in mind this is just my opinion and the context is a theoretical job]:
I am not talking about server virtualization. Server virtualization is part of the equation, but in some cases it increases your cost due to virtual sprawl, where it becomes too easy for operations teams to turn up another server for Business App123 rather than integrate into a shared virtual farm that is configured to add and reduce resources as needed so that your economies of scale average 70% utilization. Although part of the equation, this is one of the areas that requires the most optimization.
Server virtualization served the purpose intended, but it serves no purpose if at the end of the day your virtual node count equals or exceeds your previous physical count: I still have all the same administration costs of the OS for every virtual node. The next phase is to adapt this concept further: instead of having static dedicated virtual servers, you have common pools of shared services where capacity is constantly being added and removed, so that in theory 100 virtual servers do the work of 1000 virtual servers. Of course, this is merely one example of where the data center can and should be optimized.
This is where I would begin my new role: design and implementation of Dynamic Data Center. It is a leadership role that is technology-centric, where all the key players might have a dotted line to me, but instead of managing people I am designing a custom-fit architecture that meets the needs of the company and, more importantly, the needs of the shared infrastructure teams, assuring their success by creating a shared managed solution that enforces economies of scale.
In other words, I'm managing the vision of what will be and leveraging any and all resources that allow me to achieve a shared services architecture, a shared services team, and dynamic allocation of resources, with a target data center utilization of 70% 24×7 and an infrastructure that allows for significant or minor capacity increases that are seamless and uneventful.
Maximize data center economies of scale utilizing a shared infrastructure architecture where all the components meet the requirements of hosting a dynamic allocation of physical and virtual resources to sustain a utilization rate of 70%, on average, 24 hours a day and 7 days per week, without constraint on expanding physical capacity when required. Shared resource pools of common platforms are then created for the purpose of consolidation, with the initial pool size based on the average use of the combined applications rather than any single requirement where more capacity is needed at a specific time or is event-driven (such as marketing). Utilize the minimalist approach for each resource pool, and allow dynamic resource allocation to compensate for capacity by utilizing a secondary autonomous pool that provides resources-on-loan (ROL) based on real-time demand, returned as demand decreases, or, instead of being returned, loaned out to a different resource pool.
ROI is maximized by tuning the ROL (resources-on-loan) to the minimal requirement necessary to sustain all the primary resource pools, dynamically allocating resources when and where needed, and doing so in a way that all customer SLAs are achieved. Rather than dedicated hardware or virtual servers running at 2% or even 30%, where the design called for 5 servers to meet intermittent capacity concerns, it becomes a minimalist allocation with borrowed resources when needed.
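A minimal sketch of the resources-on-loan idea follows; the pool names, capacity units, and sizes are invented for illustration, and this models only the loan bookkeeping, not any vendor's API:

```python
class LoanPool:
    """Autonomous spare pool that lends capacity units to primary pools
    on demand and takes them back when demand subsides."""
    def __init__(self, capacity):
        self.free = capacity
        self.loans = {}          # primary pool name -> units on loan

    def borrow(self, pool, units):
        """Grant up to `units` from the free pool; returns what was granted."""
        granted = min(units, self.free)
        self.free -= granted
        self.loans[pool] = self.loans.get(pool, 0) + granted
        return granted

    def repay(self, pool, units):
        """Return loaned units to the free pool as demand decreases."""
        units = min(units, self.loans.get(pool, 0))
        self.loans[pool] -= units
        self.free += units

rol = LoanPool(capacity=20)
rol.borrow("web", 8)        # a marketing event drives web demand
rol.borrow("sql", 15)       # only 12 units left, so the grant is capped
print(rol.free, rol.loans)  # 0 {'web': 8, 'sql': 12}
rol.repay("web", 8)         # demand subsides, capacity returns to the pool
print(rol.free)             # 8
```

The point of the sketch is the capped grant: the ROL stays minimal, and SLAs are met by shuffling the same small pool between tenants rather than dedicating hardware to each.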
To summarize, the number of physical and virtual managed nodes should decrease significantly, because the majority of servers sit idle the majority of the time and have enough in common from a platform perspective that the hosted applications (website, business application, MSMQ, SQL, et cetera) can be consolidated to shared resource pools: for the majority of applications, they do not need a dedicated server, whether physical or virtual. Those applications that do require their own dedicated servers are still part of the equation and will never go away completely, but that would be the goal eventually. Once the dynamic private cloud is online, you then start to engage the various vendors with the new requirements and simply add this to the wish list. (Note: this is the abridged version! I state this for those contemplating all the variables I left out; I feel that I hit the major items, and I have a LinkedIn constraint on the number of characters.)
Ideally, I am looking for a scenario where a company wishes to implement a virtualization solution or private cloud for purposes of creating a dynamic data center, which can only be achieved by leveraging all the shared services teams and the cooperation of all the impacted teams (Storage, Networking, ISO, Middleware, Windows, Midrange, Telephony, Desktop Support, et cetera) toward a common goal: to create a shared services infrastructure to support the co-owned dynamic data center. Most places I've visited seem to take a siloed approach to IT where it is "each team for themselves," with a lack of any communication; some teams, I'm told, even hate each other. This idea would never work in that environment unless you fire everyone and start over.
A dynamic data center is "adaptable"; for example, a capacity-on-demand private cloud solution that hosts the virtual infrastructure, business applications, development services, and other services typically covered under the term "cloud."
It makes sense on paper, but in reality it is a "paradigm shift at the infrastructure level that impacts how architecture and engineering teams communicate today," because it requires every infrastructure team to transition from an isolated silo to a shared infrastructure methodology. In other words, they lose some control over their environment for the greater good of all teams. This can be very intimidating after working in a very restricted silo, to the point where OLAs are required between teams within the same IT organization.
Executive leadership acceptance is required, and the commitment extends beyond just a "yes" or "no." It is crucial that everyone impacted by the change understands this as the best long-term direction and commits to an effective communication strategy, first to all leadership teams and then to the architecture and engineering teams, so that everyone fully understands the wind of change is headed their way. Full disclosure, but presented in a way that allows for input from all impacted teams so that they become part of the solution. If I am leading the change, this is non-negotiable.
Change such as this cannot be accomplished without unfiltered communication and the ability to establish and utilize every resource. To a certain degree, everyone needs to feel they are part of the solution and understand the reasons for moving in this direction, with open dialog with all the teams, including a Q&A session prior to any changes.
You have buy-in from top leadership down and effective communication so that everyone knows which way the ship is headed. Those who do not wish to compromise, or who prefer isolation over shared responsibility, have the option to leave, and we fill the open slot with someone that aligns with the goals of the organization. At some point, when this concept takes hold, those individuals will quickly run out of options as more companies move to a dynamic data center with shared services teams versus isolated services and teams.
Most of the infrastructure leaders are already in the know (and the CIO probably is as well) that the data center is underutilized, using 2-3% of the processing power on average. A typical example: physical hardware sitting in the racks to run a batch report at midnight with an SLA of under 10 minutes, so to meet the SLA the company purchases 12 physical 2U servers running distributed processing [at the software layer] to accommodate this one task. Versus: spin up 12 or even 20 virtual machines at midnight, run the process, the process finishes under SLA, spin the virtual machines down. At this stage you are moving those resources based on requirements at that given moment, rather than physical hardware sitting in racks doing nothing 23.9 hours of the day.
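The arithmetic behind that example can be sketched as follows; the server counts and runtimes are taken loosely from the text, and the units are simple server-hours:

```python
def dedicated_server_hours(servers, hours_per_day=24):
    """Server-hours burned per day by always-on dedicated hardware."""
    return servers * hours_per_day

def burst_vm_hours(vms, run_minutes):
    """VM-hours consumed by spinning machines up only for the job."""
    return vms * run_minutes / 60

# 12 dedicated 2U boxes idling all day vs. 20 VMs for a ~10-minute batch run.
print(dedicated_server_hours(12))   # 288 server-hours per day
print(burst_vm_hours(20, 10))       # about 3.3 VM-hours per day
```

Roughly two orders of magnitude of capacity sit idle in the dedicated design, which is the 23.9-hours-of-nothing point made above.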
Server virtualization is somewhat better, but it can actually increase your cost due to virtual sprawl, where it becomes too easy for administrators to simply turn up another server for Business App123 rather than integrate into a shared virtual farm that is configured to add and reduce resources as needed so that your economies of scale average 70% utilization. I have a lot of other examples, but I am limited by the number of characters in my response, so I will summarize as follows:
Once you have senior management convinced, and prior to the announcement, you need someone who can coordinate with the leads from each impacted team to design the new architecture. Design of the new architecture is generally the easiest part of this equation. The hard part is selling the vision to IT teams that operate in silos and changing the thought process from "my team" to: one team > one IT > one common vision > dynamic data center.
The goal is 70% utilization, utilizing a core set of shared services and allocation of resources when and where needed. Why run 5 web servers for an anticipated load between 3 and 7, versus starting with two and spinning up resources as certain processes reach pre-defined thresholds, then spinning those resources down and turning up different resources under a different load-balanced VIP because website B now requires those resources? This same dynamic applies to all services, including application servers.
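A toy version of that threshold logic might look like this; the thresholds and the floor of two servers are placeholders an architect would tune per service:

```python
def scale_decision(active, load_per_server, scale_up_at=0.80,
                   scale_down_at=0.40, min_servers=2):
    """Return the new server count for a load-balanced VIP based on
    per-server load (0.0-1.0) against pre-defined thresholds."""
    if load_per_server > scale_up_at:
        return active + 1                       # spin one up
    if load_per_server < scale_down_at and active > min_servers:
        return active - 1                       # free resources for site B
    return active                               # hold steady

print(scale_decision(2, 0.85))  # 3 -> demand spike, add a server
print(scale_decision(3, 0.30))  # 2 -> demand fell, release a server
print(scale_decision(2, 0.30))  # 2 -> already at the floor, hold
```

The released capacity is what gets re-homed under a different VIP when another website needs it, which is the whole mechanism behind running two servers instead of five.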
The target corporate audience is unique; the company must be large enough, meaning they already have a dedicated Storage team, dedicated Midrange team, dedicated Middleware team, dedicated Windows Server team, dedicated AD team, and dedicated DNS team. With the help of top-down management, you create one team with a common vision versus 6 to 10 teams with their own visions and OLAs between the various teams. Instead, there is one SLA with the business, and the architecture becomes a shared resource and responsibility among all the teams, with each team being responsible to the customer and the organization.
Most companies are willing to utilize the technology; the question is the perceived negative impact it will have on any IT organization that operates boxed-in / siloed, where each infrastructure team operates independent of the others, and where certain practices such as incident management can create an environment in which these boxed-in teams become very protective of their box and will do anything to prove it is not their issue, instead of working together with all teams to simply find the problem. Utilize a system of reward for finding the problem, versus a series of demoralizing meetings pointing out all the wrongs and the financial impact to the business. Don't get me wrong, there needs to be a system of accountability, but there are better ways of holding teams accountable that do not force teams into isolation, having to defend themselves or point the finger at another team to get out from under that spotlight. There are better ways, and I bring this up as an example of what can be a side effect of the change I would like to implement.
What essentially amounts to a position created out of thin air and a hall pass with no restrictions to create the necessary changes to implement a dynamic data center where economies of scale is a requirement (not a goal), and at the end of every month I can produce a chargeback report for every BU for services rendered that is not "funny money," or a report to the CFO that shows the valid cost of doing business.
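A chargeback report of that kind reduces to metering plus a rate card; this sketch uses invented BU names, meter readings, and rates purely for illustration:

```python
def chargeback(usage_by_bu, rate_per_unit_hour):
    """Monthly chargeback per business unit from metered resource-unit-hours."""
    return {bu: round(units * rate_per_unit_hour, 2)
            for bu, units in usage_by_bu.items()}

# Metered resource-unit-hours per BU for the month (made-up numbers).
meters = {"Sales": 1200, "Finance": 300, "Marketing": 2500}
print(chargeback(meters, rate_per_unit_hour=0.12))
# {'Sales': 144.0, 'Finance': 36.0, 'Marketing': 300.0}
```

Because the units come from actual metered use of the shared pools rather than allocated hardware, the resulting numbers are defensible cost-of-doing-business figures instead of "funny money."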
In summary, this is my goal. Dynamic Data Center as I described above, or some variance thereof, is only a phone call away. If you are a CIO or board member and happen to agree with my analysis, then feel free to contact me through LinkedIn. I'm ready when you, the board of directors, CFO, COO, and CIO are ready to sign on the dotted line. Until then, talk to your leadership and see what they think about this idea. What roadblocks do you foresee?
The Few, The Proud
To do this you must know all aspects of the technology stack, not just one of the technology options within it. You cannot just know Server 2008 R2 and how to fix issues. You must know AD, DDNS, SPO accounts, Kerberos Realms, Branch Cache, Quota Services, DFS (AD DFS), IPP, IIS, .NET, Certificate Services, SQL 2008 R2, file clustering, storage, networking, firewalls, AD Integrated Zones, UPN Suffixes, Exchange, Lync, Unified Messaging, SharePoint, hypervisors, and the list just goes on to what sometimes feels like infinity.
Next, learn Server 2012 and SQL 2012 with all the new toys, minus an easy-to-use GUI. It does get better the more you use it. The more you know, the easier it becomes to solve problems or create solutions from thin air, simply because the features are there but no one is using them.
I equate it to installing the entire Office Suite, WireShark, Google Voice, Picasa, etc., then only using Word and Outlook. Then, on top of that, not using any of the time-saving features of Outlook. That was one example; next throw in PowerShell and R2 event log consolidation with subscriptions. This is what is used for the back end to monitor storage, for example, and produce an event log message based on a quota service alert that only goes to certain people; but the problem has already been handled by utilizing the API from the vendor to perform a dynamic storage allocation, generate a ticket, notify sales, adjust billing, etc.
Or, dynamic moves of web servers between VIPs so I never have boot storm issues, because I have 8 WI servers for 1 hour in the morning, 4 additional XML servers, and 4 virtual PVS servers; as logins dwindle I start shifting those back towards XenApp, because peak load is actually 3 hours later. Now, if I'm doing this for 100 customers and using 1 environment, those resources are customer-agnostic. I have control of the INF and my costs forever, and the customer has no risk, due to having a one-way trust to my INF domain only and no Forest Trusts: nothing else than what I just stated.
Windows 2008 R2 can handle over 12 million objects. Physical NetScalers depend on the type, but everything is using ICA Proxy, with VPXs for internal VIPs that only the INF sees or utilizes, because we are merely a provider of desktop services. DNS is always recursive, and we don't host customer users. They create a Global Group per desktop requirement, and we create a customer OU and DLG and utilize Role Based Access for Microsoft and Citrix; there are actually roles defined in PVS and XenDesktop that most people ignore, and it saves a ton of time when it comes time to allocate resources.
This is only one example where we act as the ASP in this equation. Using Branch Repeater, R2 Branch Cache, and other tweaks, we can leave the applications at the customer data center where they belong. Or we can build this on site, but it is much cheaper for both of us to leave it in our cloud and address the application-thick traffic in other ways, so the customer doesn't change anything on their side, like Help Desk or IT Security fulfillment with auditing if they have SOX applications.
It Does Not Take A Genius
Server virtualization is only the beginning. It is a starting point, not an ending point; regardless of how many servers you "cut," there are still more to be "cut," in that most data centers right now are using about 2-3 percent of overall resources when I walk in the door, but after I leave they will be running at 60-70%, and the overall asset count, whether virtual or physical, should go down another 50%. That is a lifetime customer that will tell 10 other customers, which leads to more hardware sales, not fewer. This is the part most executives or business owners miss; they get lost in my vision because they think they are losing something by reducing units sold, but in fact the goal is to change the industry forever, be a leader, and prove that outsourcing does not work. This works.
Companies are willing to outsource, which is turning out to cost more long term. This solution alone is my counter-proposal to outsourcing, but not just that: it is a paradigm shift in how we utilize our resources. Having a single RO image for INF resources and eliminating these complex provisioning cycles. Utilizing storage hardware and software combinations to automate the storage and turn it from a static manual process to a dynamic process that will automatically move up or down the stack from high speed to low speed based on metrics that I define.
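Those metric-driven moves up and down the stack can be expressed as a simple policy function; the tier names, thresholds, and volume names here are invented for illustration of the idea, not taken from any storage vendor:

```python
def target_tier(iops_30d_avg, hot_threshold=500, cold_threshold=50):
    """Pick a storage tier from a measured IOPS metric; the thresholds
    are the policy knobs the architect defines."""
    if iops_30d_avg >= hot_threshold:
        return "ssd"    # high speed: hot data moves up the stack
    if iops_30d_avg <= cold_threshold:
        return "sata"   # low speed: cold data moves down
    return "sas"        # middle tier for everything in between

# Measured 30-day average IOPS per volume (made-up numbers).
volumes = {"billing-db": 2200, "file-share": 30, "web-logs": 180}
moves = {name: target_tier(iops) for name, iops in volumes.items()}
print(moves)  # {'billing-db': 'ssd', 'file-share': 'sata', 'web-logs': 'sas'}
```

Run on a schedule against real array metrics, a policy like this is what turns tiering from a static manual process into a dynamic one.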