Zeta Architecture
MapR Technologies, Inc.
White Paper, March 2015

Introduction

Data processing in the enterprise quickly shifts from “good enough” to “we need more and faster” as expectations grow. The Zeta Architecture is an enterprise architecture that enables simplified business processes and defines a scalable way to increase the speed of integrating data into the business.

There will be no successful path to the future without understanding and appreciating history, which is why it is of the utmost importance to understand how the current state of enterprise architectures came about.

A Brief History of Enterprise Architectures

While building out a data center, resources are often thought of as pools of servers, where each pool meets the needs of a specific use case. Lines are drawn between the pools of servers, resulting in static partitions. They are static in the sense that the resources cannot grow dynamically: growth in any particular partition over time has no direct effect on the other partitions. This partitioning model simplifies troubleshooting, because a failure can be traced to one of those static partitions.

Static partitioning also enables a simple way to calculate the theoretical maximum throughput of the software running in a partition, which makes capacity planning fairly straightforward. Engineering teams are usually concerned with understanding the capacity of the software, and the IT operations team usually needs to know where to add capacity for future growth. This information gives you a maximum for your volume, your compute, and your memory. Even so, most use cases will never realize complete utilization of the resources in all given pools, due in part to the workload imbalance created by static partitioning.
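The capacity-planning arithmetic that static partitioning affords can be sketched in a few lines. The server counts, per-server throughput, and growth figures below are hypothetical illustrations, not numbers from this paper:

```python
import math

# In a static partition, theoretical maximum throughput is simply
# per-server capacity times the number of servers in the pool.
def partition_max_throughput(servers: int, reqs_per_server: float) -> float:
    """Theoretical maximum requests/sec the partition can sustain."""
    return servers * reqs_per_server

# IT operations can size future growth the same way: divide the
# target load by per-server capacity and round up.
def servers_needed(target_reqs: float, reqs_per_server: float) -> int:
    return math.ceil(target_reqs / reqs_per_server)

# A hypothetical web-tier partition: 10 servers at 500 req/s each.
current_max = partition_max_throughput(10, 500.0)       # 5000.0 req/s
growth_plan = servers_needed(current_max * 1.5, 500.0)  # 15 servers
```

The simplicity is the point: because the partition cannot grow dynamically, its ceiling is a fixed product, and planning reduces to arithmetic.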
Resource isolation is a big deal, and as nearly every engineer will attest, fast troubleshooting is very important. Production or IT operations, development, and QA all need mechanisms to isolate issues so they can understand where a problem originates, and whether it is one issue or several. Their goal is to quickly track down and identify an issue, deploy a fix, and ensure that the problem has been resolved.

Business continuity encompasses everything that keeps your business in business. We have to make sure we don’t forget about things like backups and the schedules that come along with them. Disaster recovery plans should be in place not only for peace of mind, but to ensure that the business can continue in the face of the unexpected. Backup plans are generally defined against each of the static partitions, and they tend to cover recovery from anything from a single lost server to the loss of the entire data center. Most of these plans will allow for different levels of outage preparedness, ranging from hours to days of downtime. Clearly, every business can benefit from having rock-solid plans and processes for outages.

Isolated Workloads Come at a Cost

One of the most notorious issues of isolated workloads is wasted capacity and wasted energy. Think
about a common use case like web servers. Take an instance where a business uses about 10 web serv-
ers running at 5% utilization (very normal for web servers), delivering web content with a load balancer
sitting in front so it can handle traffic spikes. If utilization is nearly always below 10%, that leaves 90% as
constantly wasted resources. Not only is there a capital cost for the 10 web servers, but when factoring in
the energy costs, it becomes an even bigger deal. It sure would be nice to get better utilization of capital
for the business.
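The arithmetic behind that waste is worth making explicit. The sketch below is a toy model with invented numbers (the server counts and utilization figures are illustrative assumptions, not measurements from this paper); it simply totals how much statically provisioned capacity does useful work on average:

```python
# Toy model of statically partitioned pools. Each pool is sized for its
# own peak, so average utilization reveals the idle capacity.
workloads = {
    "web": {"servers": 10, "avg_utilization": 0.05},
    "analytics": {"servers": 20, "avg_utilization": 0.30},
    "qa": {"servers": 5, "avg_utilization": 0.10},
}

total_servers = sum(w["servers"] for w in workloads.values())
used = sum(w["servers"] * w["avg_utilization"] for w in workloads.values())
wasted = total_servers - used

print(f"provisioned: {total_servers} servers")
print(f"busy (avg):  {used:.1f} server-equivalents")
print(f"idle (avg):  {wasted:.1f} server-equivalents ({wasted / total_servers:.0%})")
```

A shared pool sized for the combined load plus headroom would need far fewer machines, which is exactly the utilization gain the rest of this paper argues for.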
Isolation of resources isn't free, because every server needs to be monitored. Underutilized hardware
consumes more than just energy; it also consumes the time required to keep it secure and up-to-date.
Processes that move data from the servers generating information to the servers processing it tend to be
rather complicated to set up and manage. Beyond the processes themselves, they normally require people
to monitor them around the clock. These jobs typically sit in a very high-profile position in a business
workflow; if they are tied to revenue generation and any of them fails, you may have to answer to your
customers.
In any good agile deployment process, there is a desire to promote software between any number of
environments to support the business. Promoting software between environments is tricky, because
environments come in different shapes and sizes: a development environment usually does not contain
the same number of servers per pool as a production environment. Given 3 servers in QA and 100 servers
in production, is there a guarantee that the code that was tested will behave the same way in production?
Most assuredly not. Most people have lived through this scenario when going into production, and it is
perhaps one of the most difficult and least fun things to troubleshoot.
The Model of This New Architecture
Goals with a New Approach

The first goal with a new enterprise architectural approach should be the ability to leverage all existing
hardware in the data center. This would enable resources to be put on any business problem at any time.
There is still a need to maintain some form of isolation that meets the needs discussed in the current model.
The requirements for moving software between environments need to be understood, and the processes
need to be able to accommodate the new architecture and deliver more than what already exists.
Backing up data for point-in-time recovery, whether from tape or any other form of backup, needs to
improve on what exists today. Too many architectures deliver no real added benefit for disaster recovery,
and the restoration processes after a serious disaster could take weeks. The goal of this new approach
should be to support real-time business continuity. In the face of a disaster, any recovery should be
accomplished within a time frame consistent with high availability expectations (e.g., 99.9% uptime or
better). That is to say, this architecture delivers the ability, but the onus is still on the implementer to
know how many nines are necessary for the business.
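The "how many nines" decision maps directly to a downtime budget. A minimal sketch (this is standard availability arithmetic, not figures from this paper) shows how little downtime each target actually allows per year:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # ignoring leap years for simplicity

def downtime_budget_minutes(availability: float) -> float:
    """Minutes of downtime per year allowed at a given availability target."""
    return MINUTES_PER_YEAR * (1.0 - availability)

for label, target in (("three nines", 0.999),
                      ("four nines", 0.9999),
                      ("five nines", 0.99999)):
    print(f"{label}: {downtime_budget_minutes(target):7.1f} min/year")
```

Recovery processes that take days are compatible with none of these targets, which is why this architecture treats business continuity as a first-class goal.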
A cohesive security and compliance model, including authorization and authentication, should be
considered to make management of systems easier and less prone to error. All the components have to
work with the same security controls; users, jobs, and data all need to be secured. We must ensure that
even the most stringent regulatory environments are able to use this architecture.
The high-level component view of this architecture is intended to support the goals defined for this new
architecture. It is not intended to dictate which specific software or project, open source or otherwise,
must be used. There are seven pluggable components in this new architecture, and all of them must work
together:
• Distributed File System. Utilizing a shared distributed file system, all applications will be able to read
and write to a common location which enables simplification of the rest of the architecture.
• Real-time Data Storage. This supports the need for high-speed business applications through the use
of real-time databases.
• Pluggable Compute Model / Execution Engine. Different groups within a business have different
needs and requirements at any given time, which calls for supporting potentially different engines and
models to meet the demands of the business.
• Deployment / Container Management System. A standardized approach to deploying software is
important, and all resource consumers should be able to be isolated and deployed in a standard way.
• Solution Architecture. This focuses on solving a particular business problem. There may be one or
more applications built to deliver the complete solution. These solution architectures generally
encompass a higher-level interaction among common algorithms or libraries, software components,
and business workflows. All too often, solution architectures are folded into enterprise architectures,
but the Zeta Architecture keeps them clearly separated.
• Enterprise Applications. In the past, these applications would drive the rest of the architecture.
However, in this new model there is a shift. The rest of the architecture now simplifies these
applications by delivering the components necessary to realize all of the business goals we are defining
for this architecture.
• Dynamic and Global Resource Management. Allows dynamic allocation of resources to enable the
business to easily accommodate whatever task is the most important that day.
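For quick reference, the seven components and the example technologies this paper goes on to name can be laid out as a simple mapping in code (the pairings are drawn from the examples discussed below and are illustrative, not a mandated stack):

```python
# The seven pluggable components of the Zeta Architecture, each paired
# with example technologies mentioned in this paper.
zeta_components = {
    "Distributed File System": ["MapR-FS", "HDFS", "Amazon S3"],
    "Real-time Data Storage": ["MapR-DB", "HBase"],
    "Pluggable Compute Model / Execution Engine":
        ["Hadoop MapReduce", "Apache Drill", "Apache Spark"],
    "Deployment / Container Management System":
        ["Mesos containers", "Docker", "Kubernetes"],
    "Solution Architecture":
        ["machine learning", "recommendation engines", "Lambda architecture"],
    "Enterprise Applications": ["web servers", "advertising engines"],
    "Dynamic and Global Resource Management": ["Apache Mesos", "YARN", "Myriad"],
}

for component, examples in zeta_components.items():
    print(f"{component}: {', '.join(examples)}")
```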
As we look at what technologies can fit in here, we will start right in the middle. Mesos is a data
center-wide resource manager; YARN is a resource manager for functionality that lives in the Hadoop
ecosystem. Used alone, each creates its own silo of clusters. To get around this, Project Myriad can be
utilized: Myriad enables Apache Mesos to manage YARN. When combined, these resource management
tools bring all the resources into a single cluster.
There is flexibility in the choice of distributed file system. On a cloud provider like Amazon, there is S3;
within a private data center, there is MapR-FS or HDFS. What is important to understand is that these
capabilities are the foundation of the rest of this architecture. While MapR-FS implements all of the APIs
supported by HDFS, it delivers functionality that is not available within HDFS.
Real-time applications require guarantees on data retrieval and storage. Technologies here include
MapR-DB, which fully implements the HBase APIs, as well as HBase itself. While this is the area where
Cassandra and MongoDB would fall, they are not referenced in this architecture because they do not
support running on distributed file systems like MapR-FS or HDFS. While it is conceivable that they
could be adapted to run here, that self-limitation is what keeps them out of this architecture.
The compute model / execution engine is where the biggest opportunity shows up from an analytics and
streaming perspective. In general, more than one engine will be used at a time to cover multiple use
cases, and they all need to support the distributed file system in order to leverage all of the compute
power. This enables problem solving with multiple technologies, including Hadoop MapReduce, Apache
Drill, Apache Spark, or any others that can work with this distributed file system. The other benefit is
that, combined with global resource management and full access to all the data, those who perform
analytics work can have full access at all hours of the day, with resources constrained or expanded based
on production utilization.

[Figure: Example technologies that fit into the Zeta Architecture]
The containers portion of this architecture is important, as it delivers a kind of isolation that certain use
cases require. The isolation provided by containers makes it easier to move software from development
to QA to production. Mesos ships with its own container system, but it also supports Docker and
Kubernetes. This provides a better process model, which helps ensure consistent software between
environments.
In the solution architecture space there are concepts like machine learning, recommendation engines
or even the Lambda architecture. These are solution architectures that are going to leverage this
platform, and you need to be able to describe them in a way that is more specific than the enterprise
architecture itself.
The simplest example of an enterprise application that could be used here is a web server. Take an
Apache web server deployed in a container that is configured to write its logs straight through to the
distributed file system. This bypasses log shipping and allows for the data to be processed or analyzed
immediately, without delay.
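As a rough sketch of that pattern, suppose the distributed file system is exposed as an ordinary mount point (the `DFS_LOG_DIR` path below is a made-up stand-in, with a local fallback so the snippet runs anywhere). The application then logs through standard file APIs, and the data lands directly where it will be processed:

```python
import logging
import os

# Stand-in for a distributed file system exposed as a POSIX mount
# (e.g., over NFS); falls back to a local directory for demonstration.
LOG_DIR = os.environ.get("DFS_LOG_DIR", "/tmp/zeta-demo/logs")
os.makedirs(LOG_DIR, exist_ok=True)
log_path = os.path.join(LOG_DIR, "access.log")

logger = logging.getLogger("webapp.access")
logger.setLevel(logging.INFO)
handler = logging.FileHandler(log_path)
handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
logger.addHandler(handler)

logger.info("GET /index.html 200")  # written straight to the shared location
handler.flush()

with open(log_path) as f:
    last_line = f.readlines()[-1]
print(last_line.strip())
```

There is no shipping job here to monitor; the bytes the application writes are immediately readable by the execution engine.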
Implementations

Google’s example

This architecture will allow anyone who implements it to run at Google scale. As a point of reference,
here is a mapping of Google’s technologies onto this architecture.

[Figure: Technologies Google leverages laid over the Zeta Architecture]
Let’s take a look at a few interesting points regarding Google’s technologies in this diagram. Borg is
sometimes referred to as “the project that is unnamed” within Google, but outside of Google it is called
Borg. Omega is their scheduler, and they define it as the crux of the entire distributed processing
platform, as it figures out where and when to place jobs. From a solution architecture perspective, Gmail
conceptually operates on top of a recommendation engine, and the machine learning concepts in general
are delivered in many of their product offerings.
Take a step back for a moment to understand all of the components that comprise a familiar application.
It is probably implemented with many of these same concepts. The question is, “Does the application
leverage all of these in a heterogeneous way?”
Ad Serving (Recommendation Engine) Example
Web servers and advertising make good implementation examples, as they are a cornerstone of the
internet. The high-level architecture for such applications is not overly complicated.
At almost every tier of this application architecture, there are logs emitted and collected. Collecting those
logs is important to advertising, as they are used to generate revenue calculations as well as analytics on
the performance of advertisements. This will create a feedback loop to optimally tune the advertising
engine. In general, this diagram is not overly complicated and it should make sense. When given the
opportunity to lay the application architecture on top of the new Zeta Architecture, there are a number
of simplifications that occur.
Now the web server, advertising engine, analytics execution engine, distributed file system and real-time
data store are all running on each server or in any combination necessary based on load requirements
and how many instances are dynamically started. The first benefit is that the logs generated by the web
server and advertising engine land directly on the distributed file system. Since the data is landing where
it is processed, the execution engine doesn’t have to wait for Flume processes to move the data. This
also means there are no people monitoring the Flume processes to ensure data makes it to the analytics
cluster in a timely fashion.
[Figure: Generalized Digital Advertising Platform Architecture]
The user records that advertisements are generated for come straight out of the real-time data store and
go right back in after modifications are made. This puts the data much closer to the advertising engines
where it is used.
Notice that the billing system is located in a relational database (RDBMS) outside of the core distributed
file system. That isn’t a requirement; it is just the most common scenario.
Integration into the Zeta Architecture
All processes running in the data center can be broken into two groups. The first group is made up of
processes that offer resources: the global resource manager (CPU and memory) and the distributed file
system (disk space and I/O). The second group is made up of processes that consume resources: web
servers, Apache Drill, and Apache Spark, among others. Resource consumers should be containerized,
whereas resource offerers should never be containerized.
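That rule of thumb is simple enough to state in code (the process names are just the examples from this paragraph; a real placement policy would be richer):

```python
# Offerers of resources (the resource manager and the distributed file
# system) run on bare metal; every consumer of resources runs in a container.
RESOURCE_OFFERERS = {"global-resource-manager", "distributed-file-system"}

def should_containerize(process: str) -> bool:
    """Consumers are containerized; offerers never are."""
    return process not in RESOURCE_OFFERERS

placements = {
    proc: ("container" if should_containerize(proc) else "bare metal")
    for proc in ("web-server", "apache-drill", "apache-spark",
                 "distributed-file-system")
}
print(placements)
```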
Integrating business applications into this architecture requires plugging into standard APIs. Many
custom adapters have been written to work with the HDFS API; however, most integrations require some
sort of custom plugin to fully utilize HDFS. MapR-FS, by contrast, offers native NFS support, so any
application that can read and write to an NFS mount can plug into this architecture. The added benefit of
this approach is that when an application plugs in through these standards, its data is automatically
replicated by the distributed file system.
A pluggable security model is required: applications come in many varieties, and expecting them all to
implement the same security model is unrealistic. Linux pluggable authentication modules (PAM) are
very convenient in most cases, as they provide a tremendous amount of flexibility. Kerberos is an option
here, but it is not a perfect solution for long-running jobs.
While many RDBMSes have the potential to work in this model, most do not openly support these
distributed file systems. Some have their own distributed storage, but it is explicitly for that product’s
use. Some will work just fine over a native NFS adapter, while others may not. If the RDBMS of choice
supports this, there is a great opportunity, provided the data format can be read by the analytics
execution engine.
[Figure: Digital Advertising Platform on the Zeta Architecture]
Historically, data analytics teams usually get the short straw when it comes to getting resources. They
don’t typically get access to production systems. Generally they have to get data dumps and have access
to less than adequate compute resources. This model enables that part of the business by allowing them
to participate in this new type of isolation and have dynamic access to the globally managed resources.
Zeta Simplifies Application Architectures
Nearly every application architecture needs to concern itself with many different things, including data
protection schemes, how to back up data, recovery from failures, and running multiple instances of
software. The Zeta Architecture simplifies those application architectures because it delivers many of
those pieces, which means there is less to go wrong. Fewer moving parts means fewer potential failure
points, and better hardware utilization means less to operate and lower operational costs. The business is
then capable of leveraging a global set of resources to solve whatever problem is most important right
now, and priority number one can change quickly in any business.
Resilience is extremely important in an application architecture. The Hadoop ecosystem components
help protect against disk and server failure; however, they do not protect against people making
mistakes. With a statically partitioned model, full backups are usually performed only once per week,
with partials performed nightly, and recovering from those takes significant time. In this new model,
recovery is easier to plan for and more resilience is available in the system, due primarily to near
real-time backups and to features of the distributed file system.
Streaming Applications
Occasionally there is a need to stream data in, as opposed to waiting for some periodic interval before
processing it. If an application architecture calls for acting in real time on each and every event that
appears in a log file, then streaming should be considered. This falls into the pluggable compute model /
execution engine portion of the Zeta Architecture, and it may or may not be considered “analytics” based.
For this use case, there are a few options available. The first is to set up the stream processing engine
with a source that tails the log file from the distributed file system. The second approach is for the
application generating the logs to write the log information to some type of agent that can persist the log
to disk and send it to the stream processing engine simultaneously. The final approach is to skip the disk
altogether and send the data directly to the stream processing engine, or to some queue sitting in front of
it. Each of these approaches has its own benefits and tradeoffs, all of which should be considered before
making a selection.
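The first option can be sketched with nothing but the standard library (the log format and the in-memory file are invented for illustration; a real deployment would hand these events to a stream processing engine rather than a Python loop):

```python
import io
import itertools
import time
from typing import Iterator, TextIO

def follow(log: TextIO, poll_interval: float = 0.5) -> Iterator[str]:
    """Yield each line appended to a log file, tail -f style."""
    while True:
        line = log.readline()
        if line:
            yield line.rstrip("\n")
        else:
            time.sleep(poll_interval)  # nothing new yet; poll again

def parse_event(line: str) -> dict:
    """Parse a hypothetical space-separated access-log line into an event."""
    timestamp, method, path, status = line.split()
    return {"ts": timestamp, "method": method, "path": path, "status": int(status)}

# Demonstration against an in-memory stand-in for a log file that
# lives on the distributed file system.
demo_log = io.StringIO("t1 GET /a 200\nt2 GET /b 404\n")
events = [parse_event(line) for line in itertools.islice(follow(demo_log), 2)]
print(events)
```

Because the log already lives on the shared file system, the same bytes the web server wrote are what the stream processor reads; the tradeoff versus the direct-to-queue option is the extra disk round trip.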
Summary
The benefits of the Zeta Architecture are plentiful. Google has pioneered this architecture, relies on it for
the entire company, and performs over two billion container deployments per week; containers help
deliver the isolation needed to move into the future. This architecture gives any company that adopts it a
competitive edge. The Zeta Architecture will become the conventional way to build and deploy software
in the data center, whether on-premises or hosted. This is the model for creating an as-it-happens
business: one that can sense and respond in real time to its environment.
To read a summary of the business benefits of utilizing this architecture, or for a summary document to
share with people who don’t want as many technical details, download the Building the Data Centric
Enterprise white paper.