A methodology I developed a while back for a military application, and that I'm now revamping to fit a consumer model. I thought I would share the presentation in the hope that it sparks some interesting conversations, and maybe educates the public, not only on cloud computing as a whole, but also on the fact that cloud bursting, as commonly portrayed, is not exclusively a public cloud resource.
Cloud bursting methodology
1. Acknowledgements
Firstly, I'd like to thank everyone attending for taking the time out of their day and allowing me to present.
This methodology was developed for a military application; this presentation is therefore the theory of applying that same methodology to a more consumer-based model.
2. Cloud Burst Methodology
• Methodology
• Explanations of Cloud Burst Methods
• Network Access Methodologies – Inclusive of Cloud Bursting
• Identity Management Methodologies – Inclusive of Cloud Bursting
• Special Purpose Computing – Cloud Bursting
• Business and Use Cases Applied to Special Purpose Computing
• High Level Architecture Walkthrough
• Q & A
3. Cloud Burst Methodology
Cloud Burst Methodology is the practical theory, and application, of forming an autonomous cloud computing fabric: a dedicated fabric established for single use, to carry out one process, mission, or directive. Afterwards, the fabric's contents are warehoused and the core is destroyed, eliminating the access structure and any access to the data. The data is warehoused in a separate environment with exclusionary access rights, different from and in contrast to the rights assigned within the special-purpose fabric.
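The create–use–warehouse–destroy lifecycle described above can be sketched in a few lines of Python. This is a minimal illustration, not an implementation from the deck; the class and method names are mine, and the "warehouse" is just a dictionary standing in for the separately controlled storage environment.

```python
import secrets

class BurstFabric:
    """Hypothetical single-use fabric: created for one mission,
    then warehoused and destroyed (names are illustrative)."""

    def __init__(self, mission):
        self.mission = mission
        self.data = []
        # The access structure exists only while the fabric is alive.
        self.access_key = secrets.token_bytes(32)

    def record(self, item):
        if self.access_key is None:
            raise RuntimeError("fabric destroyed; no access structure remains")
        self.data.append(item)

    def teardown(self, warehouse):
        # Contents move to a separate, differently controlled environment...
        warehouse[self.mission] = list(self.data)
        # ...then the core and its access structure are destroyed.
        self.data = []
        self.access_key = None

warehouse = {}
fabric = BurstFabric("mission-1")
fabric.record("txn-001")
fabric.teardown(warehouse)
assert warehouse["mission-1"] == ["txn-001"]
assert fabric.access_key is None  # no remaining path back into the fabric
```

The point of the sketch is the ordering: the data survives in the warehouse, but the key that granted access to the fabric does not.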
4. Cloud Burst Methodologies - Explained
The concept of 'Cloud Burst Methodology' came from a project named MVII (Vehicle Intelligence Initiative for Mobility) and a technique termed ad-hoc VPN spawning. Ad-hoc VPN spawning would occur as one MVII-equipped car approached another: an ad-hoc VPN was created via a spontaneously generated network access key, then piggybacked to the POP (Point of Presence). Because the access key was created spontaneously for each encounter, the chance of penetration or intrusion was almost negligible.
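The spontaneous-key idea can be sketched as below. This is a toy illustration under my own assumptions (node identifiers and the derivation scheme are hypothetical, not from the MVII project): a fresh random nonce per encounter means no two sessions ever share a key, which is what makes replaying or pre-computing a key impractical.

```python
import secrets
import hmac
import hashlib

def spawn_session_key(node_a, node_b):
    # A fresh random nonce per encounter: no two sessions share a key.
    nonce = secrets.token_bytes(32)
    # Bind the key material to this specific pair of endpoints.
    pair = ("%s|%s" % (node_a, node_b)).encode()
    return hmac.new(nonce, pair, hashlib.sha256).digest()

k1 = spawn_session_key("mvii-001", "mvii-002")
k2 = spawn_session_key("mvii-001", "mvii-002")
assert len(k1) == 32   # 256-bit session key
assert k1 != k2        # each encounter spawns a distinct key
```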
5. Network Access Methodologies – Explained
The key purpose of network or systems security is to create an environment where security 'remediation' is osmotic, establishing a healthy environment in which CISOs can rectify issues. The key issue here is the word remediate, meaning to cure a defect or issue. Even with the current behavioral 'AI' technology being implemented within standardized network security platforms, we are still remediating network penetration issues rather than proactively avoiding them.
6. Security Implications
In any network access structure, the more perpetual the time, users, and traffic in and out of even a contained network unit, the greater the possibility of penetration or intrusion, whether by internal or external entities.
7. Access Management Solutions
Cloud Burst methodologies can, by proxy, mitigate possible penetrations and intrusions, not by instantiating new network protocols, but by limiting access to single-use fabrics, disallowing perpetual access by reserving these units for special-purpose computing procedures.
8. Special Purpose Computing Solutions
• Special Purpose (Use) Computing
– Using special-purpose fabrics for single-use computing instances
• Such as transactional processing during high-traffic periods
• Limit Network Accessibility
– By limiting access
• Access Control
– By instantiating spontaneous user controls
• Identity Management
9. Access Management Architecture
Special-purpose computing reduces the chances of intrusion by establishing access rights via 1024-bit encrypted keys, spontaneously created at instantiation. These key pairs have a half-life, which bounds the user's access capabilities and accessibility, and are tied to that specific fabric.
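A minimal sketch of such a credential, under my own assumptions (the class, field names, and TTL interface are illustrative, not from the deck): the key is generated at instantiation, is usable only for its own fabric, and stops granting access once its lifetime elapses.

```python
import time
import secrets

class FabricCredential:
    """Hypothetical credential: key material born at fabric instantiation,
    valid only for that fabric and only until its lifetime expires."""

    def __init__(self, fabric_id, ttl_seconds):
        self.fabric_id = fabric_id
        self.key = secrets.token_bytes(128)  # 128 bytes = 1024 bits
        self.expires_at = time.time() + ttl_seconds

    def grants_access(self, fabric_id):
        # Both conditions must hold: right fabric, and not yet expired.
        return fabric_id == self.fabric_id and time.time() < self.expires_at

cred = FabricCredential("fabric-42", ttl_seconds=3600)
assert cred.grants_access("fabric-42")
assert not cred.grants_access("fabric-99")       # bound to one fabric only

expired = FabricCredential("fabric-42", ttl_seconds=-1)
assert not expired.grants_access("fabric-42")    # lifetime already elapsed
```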
10. Cloud Burst Methodology
• Practical Applications
– Business Cases
– Use Cases
• Best Practices
– Methodology
– Architecture
11. Practical Applications of a Cloud Burst Methodology
• Originally designed around a military application
– Other potential applications:
• High-traffic, high-volume shopping seasons
– PCI and SOX compliance packages
• Compliance-driven arenas, such as health care, specifically in disaster situations
– HIPAA, PCI, and SOX compliance packages
• Mergers and Acquisitions
– SOX and other compliance packages
12. Business Case – Retail
Problem: The majority of credit, checking account, and identity theft occurs during high-traffic seasonal shopping periods.
• Christmas, Thanksgiving, Easter, new releases, and other such periods
Solution: Using a derivative of a Cloud Burst Methodology, onboarding compliance packages, you can process transactions, utilize CRM packages, and create management protocols, then destroy the fabric after use, thereby eliminating all access control assigned to that fabric, and any other associations.
13. Business Case – Medical HIPAA
Problem: In 2014 the clinical application of the new HIPAA laws comes into effect. HIPAA will now govern not only patient care, but also the application of new laws surrounding DLP, network security, and systems hardening:
• Patient file storage
• Patient demographic storage
• Medical records storage
• Patient care applications
Solution: Using a derivative of a Cloud Burst Methodology, onboarding compliance packages, you can establish a special-purpose cloud to carry out disaster initiatives, such as Hurricane Katrina or Sandy, for the sole purpose of medical care and the storage of documents, patient data, and morgue and autopsy data. This would allow a facility to bring a fully functioning, prebuilt fabric up in minutes. This in turn would allow unfettered access, limited by the half-life of the accessor's credentials, for a certain time period and/or until that fabric's life has expired; in turn allowing life-saving medical information to be shared safely, in a secure environment, for the time allotted.
15. Introduction to the Stack
• Intro to hardware and support infrastructure
• Hardware
– Vendor Blade Servers
• HP, IBM, Cisco choices; the original theory was to utilize
HP equipment, i.e. 25 'C'-series HP Blade Servers
• Software
16. Stack Architecture
• Hardware
– Vendor Blade Servers
• HP, IBM, Cisco choices; the original theory was to utilize HP
equipment, i.e. 3 pools consisting of 24 (8 per chassis per
pool) 'C'-series HP Blade Servers, in three NetApp 500 TB
FlexPod configurations
• Software
– Orchestration and Server Automation
– Access Management, Identity Management
– Management Portal
– User Authentication Portal
– User Environment
17. Cloud Bursting Methodology – High-Level Reference Architecture
[Diagram. Layers shown: Open API Layer (Identity Management, Access Control, Process Management); Service-Oriented Architecture / Application Servers and Middleware (Orchestration, Provisioning, Element Management, Event Correlation, Storage Management); Data Warehousing Access Layer; Contained User Portal Environment; Shared Components (Access Management); Management Control Environment (Middleware Clients, API Access Layers); Distributed Fabrics; Data Warehouse; Physical Infrastructure: Pools 1–3 of blade servers with NAS storage, Network Provisioning, Compute Provisioning, and Round-Robin Load-Balancing Algorithms.]
18. Management Environment
• Administration Portal
– Controls Orchestration
• Creation of Flows
• Delivery of Special Purpose Infrastructure
– Access Control
• Management of Access Control
– Identity Management
• Control and input of Identity Management Environment
19. High-Level Reference Architecture
[Architecture diagram repeated, now showing Fabric Administrators (Admin) in the Management Control Environment.]
20. Special Purpose Computing
Environment Instantiation
• Request would be initialized by an internal ticket
– Flow would be built
• Provided the functionality (in this case, compliance
packages) was pre-built, it would then be injected into the
flow
– Package retrieval
• Golden (or root) images, originally a mission protocol, built
into image form, with precise locations, mission status,
mission scope, and so on
– Package instantiation
• Flow would be initialized and executed, based on
predetermined requirements
• Authentication keys are generated, based on prerequisites
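The instantiation steps listed above might be sketched as a single pipeline; all names here (`instantiate_fabric`, the ticket fields, the image store) are hypothetical, and `secrets` tokens stand in for real authentication keys:

```python
import secrets

def instantiate_fabric(ticket, compliance_packages, image_store):
    """Hypothetical sketch of the flow: ticket -> flow -> image -> keys."""
    # 1. The request arrives as an internal ticket; a flow is built from it,
    #    with pre-built compliance packages injected as steps.
    flow = {"ticket": ticket["id"], "steps": list(compliance_packages)}
    # 2. Package retrieval: the golden (root) image is pulled by name.
    image = image_store[ticket["image"]]
    # 3. Package instantiation: the flow executes against predetermined
    #    requirements, and authentication keys are generated last,
    #    one per role named in the ticket.
    keys = {role: secrets.token_hex(16) for role in ticket["roles"]}
    return {"image": image, "flow": flow, "keys": keys}

fabric = instantiate_fabric(
    {"id": "TKT-100", "image": "golden-v1", "roles": ["stakeholder", "operator"]},
    ["pci", "sox"],
    {"golden-v1": "<image-bytes>"},
)
assert fabric["flow"]["steps"] == ["pci", "sox"]
assert set(fabric["keys"]) == {"stakeholder", "operator"}
```

Generating the keys as the final step mirrors the ordering above: access rights exist only once the fabric does.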
21. High-Level Reference Architecture
[Architecture diagram repeated, now showing Fabric Instance 1 in the Contained User Portal Environment, with Fabric Administrators (Admin).]
22. Identity Management
• Access Rights
– Keys would then be assigned to mission handlers,
i.e. mission stakeholders, or in this case project
stakeholders
• Identity Management
– Keys would then be assigned to mission executors,
i.e. operatives, in this case project managers
23. High-Level Reference Architecture
[Architecture diagram repeated: Fabric Instance 1 now shows an Owner, Instance Control, and Users, alongside the Fabric Administrators (Admin).]
24. Pass-through or Pass-off
• Administrators would then be reassigned, and
ownership passed off to the mission (or in this
case project) stakeholders
– Although the pass-off has taken place,
administrators still retain some authority for
break/fix scenarios
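A toy sketch of that pass-off, assuming a simple dictionary-based ACL (all names hypothetical): ownership transfers to the stakeholders, while the administrators drop to a break/fix role rather than losing access entirely:

```python
def pass_off(fabric, new_owner):
    """Transfer ownership to mission stakeholders; admins keep break/fix only."""
    fabric["owner"] = new_owner
    # Administrator rights are reduced, not removed, so break/fix work
    # remains possible after the hand-over.
    fabric["acl"]["admin"] = "break-fix"
    return fabric

f = {"owner": "admin", "acl": {"admin": "full"}}
pass_off(f, "stakeholders")
assert f["owner"] == "stakeholders"
assert f["acl"]["admin"] == "break-fix"
```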
25. High-Level Reference Architecture
[Architecture diagram repeated: Fabric Instance 1 with Owner, Instance Control, and Users; Fabric Administrators (Admin).]
27. High-Level Reference Architecture
[Architecture diagram repeated: Fabric Instance 1 with Owner, Instance Control, and Users; Fabric Administrators (Admin).]
28. Next Steps – Authentication Process
• Mission (or in this case project) stakeholders
assign operators
• Verify requirements
• Identify key processes
• In this case, execute compliance packages
At this point the half-life of the authentication
process has been initiated
29. High-Level Reference Architecture
[Architecture diagram repeated: Fabric Instance 1 now with a User Owner, Instance Control, and Users; Fabric Administrators (Admin).]
30. Access Rights
• Mission (project) stakeholders initiate
operators' requested identities
– Access is granted to operators
31. High-Level Reference Architecture
[Architecture diagram repeated: Fabric Instance 1 with a User Owner, Instance Control, and Users; Fabric Administrators (Admin).]
32. Contained User Authentication Portal
• Mission (project) operators take control of the
user environment and execute the requested
protocols
– This pertains to mission (project) status and
mission (project) objectives
33. High-Level Reference Architecture
[Architecture diagram repeated, now showing Fabric Instance 2 alongside Fabric Instance 1.]
34. Repetition
• The same protocols and procedures would be
executed, in order, for subsequent
instantiations…
• After each mission (or in this case project)
concludes, the fabric's data is warehoused and
the core destroyed, taking with it all associated
keys, as well as the access rights granted…
– Pass-off is then given back to the administrators, to
access the raw data collected
• Event correlation, data mining, and so on, is initiated
• Depending on the department and/or organization, internal
handling of the data will differ…
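The teardown described above could be sketched as follows (hypothetical names; a plain list stands in for the centralized warehouse): the raw data is warehoused first, then the core's keys and access rights are destroyed with it:

```python
def conclude_mission(fabric, warehouse):
    """Hypothetical teardown: warehouse the data, then destroy the core."""
    # Raw data is copied out to the centralized warehouse before teardown.
    warehouse.append({"fabric": fabric["id"], "data": fabric["data"]})
    # Destroying the core revokes every key and access right along with it.
    fabric["keys"].clear()
    fabric["acl"].clear()
    fabric["data"] = None
    return warehouse

warehouse = []
fabric = {"id": "f1", "data": ["rec1"], "keys": {"op": "k"}, "acl": {"op": "rw"}}
conclude_mission(fabric, warehouse)
assert warehouse[0]["data"] == ["rec1"]          # data survives in the warehouse
assert fabric["keys"] == {} and fabric["acl"] == {}  # access died with the core
```

After this point only the warehouse side (with its separate, exclusionary access rights) can reach the data, matching the pass-back to the administrators described above.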
35. High-Level Reference Architecture
[Architecture diagram repeated: Fabric Instances 1 and 2, each with a User Owner, Instance Control, and Users.]
36. High-Level Reference Architecture
[Architecture diagram repeated, now showing Fabric Instance 3 alongside Instances 1 and 2.]
37. High-Level Reference Architecture
[Architecture diagram repeated: Fabric Instances 1–3, each with a User Owner, Instance Control, and Users.]
38. High-Level Reference Architecture
[Architecture diagram repeated: Fabric Instances 1–3, each with a User Owner, Instance Control, and Users.]
39. Conclusion
As stated at the beginning of this presentation,
this methodology was originally created for a
purely military application… However, I have
seen the need to carry it forward to more of a
consumer application, such as fabrics that
serve a compliance-driven model. That being
said, business and use cases would be needed
to determine sustainability within that model,
and subsequent configuration changes, if need
be.
40. Presentation End
Q & A
Ladies and gentlemen, thank you for your time and
consideration… I look forward to working with you
all in the near future. Please feel free to contact
me with any questions…
Jonathan Spindel
Email: jspindel@ieee.org
Phone: (954) 299-2132
Editor's Notes
Firstly, I’d like to thank everyone attending for taking the time out of their day and allowing me to present; I hope computing theory doesn’t bore you too much… What is cloud computing? Really an old idea (spanning from the AS/400-series mainframe days, which touted the early use of ‘virtualization’ and ‘VDI’, Virtual Desktop Infrastructure), officiated with new technology. The reason I bring this up is not to go into the typical NIST definition of cloud, as I’m sure we all know and understand the terminology, but to preface this presentation with a little bit of my personal career path as it applies to this methodology. From the beginning of my career, I have been greatly interested in what was originally termed ‘distributed networking and computing’, which we now know as ‘cloud’. My first introduction to cloud was in my early institutional years, through a project termed ‘PlanetLab’, which was created to serve as a virtual lab containing virtual slices, hosted by virtualized, lightweight Linux kernels. From there on out, I have pretty much focused on cloud computing and the theory-based research of distributed computing…
The concept of a Cloud Burst methodology, as I’ll explain in more depth later, was originally applied to the Israeli military, during my sabbatical in Israel, in reference to reconnaissance drone missions. Now, there is only so much I can digress into the actual military model; however, this presentation surrounds the application of the methodology in more of a consumer-based model. My intention is not to challenge current network or systems security methodologies, but rather to introduce a new application of those methodologies in an innovative format.
During this presentation I’d like to touch on several points to bring across the overall definition of a Cloud Burst Methodology. Firstly I’d like to touch on the methodology itself, then delve into a brief explanation of the methods surrounding it. After that brief overview, I’ll go into the network access and identity management methodologies that go into the overall architecture, then segue into what I like to term special-purpose, or special-use, computing… Once we cover the overall methodology, I’ll digress into the business and use cases I felt to be an appropriate vehicle into the consumer marketplace, after which we’ll take a small dive into the high-level architecture of the technology viewpoint of cloud bursting. I’ll end the presentation with a short Q&A session; my objective, however, is to tie this together cohesively, to avoid wasting too much of your valuable time.
An introduction to Cloud Burst Methodology. The purpose of this application, in practical theory, is the formation of an autonomous-unit, or special-purpose, computing fabric. The fabric’s creation hinges upon the application of a single-use process: to officiate a mission (or in this case project) objective, and to create a platform for those operators to work on, for a limited time period, in a safe and secure environment. After the mission/project has been executed and the process has completed, the fabric’s core is then destroyed, storing the raw data in a centralized warehouse and eliminating the access structure created, and with it overall access to the data by the original operators. The data is ultimately stored in a different environment, with exclusionary access rights different from, and in contrast to, those issued to mission/project stakeholders and/or operators.
The idea I coined a Cloud Burst Methodology originally came from a DOT (Department of Transportation) project I worked on over a decade ago, named MVII (Vehicle Intelligence Initiative for Mobility): a project designed to use specially equipped vehicles in more of a mesh-network environment, using themselves as bouncing, or repeating, points for perpetual connectivity of the smart-highway system. The idea specifically came from what is termed an ‘ad-hoc VPN’: a spontaneous Virtual Private Network created for vehicles in range to bounce or repeat their signals, connecting to its parent, a POP (Point of Presence) located along the freeways. The project was created in theory, as a research grant leading to an eventual design, but came before technology was able to support the magnitude of that design. Nevertheless, I derived the idea from the theoretical creation of that ad-hoc VPN…
The purpose of this slide isn’t to argue the merits of a network/systems security platform, as they are a direct necessity to any secured environment. My main point is to highlight that regardless of what protocols we set forth, or precautions we take, we still find ourselves trailing behind individuals whose thoughts are to do harm with malice… Whether it be for recreation, the intention of siphoning funds from an account, or the unauthorized access of proprietary data, there will always be elements that intend to breach a network, and we still find ourselves remediating those issues rather than proactively avoiding them. By no means does this mean the parent network should not be protected; only that the fabric itself, since its inherent structure is one of spontaneity, simply reduces the risk of potential penetration. It does not eliminate such risk, so a comprehensive support structure is certainly necessary.
In reference to the last slide, again noting that this methodology does not replace a comprehensive security support environment: it does, however, add to its efficiency and reduce the chances of infiltration, corruption, or theft of data. Simply put, the longer we perpetually allow the same users, regardless of forced password or username changes, the more we, by proxy, increase the chances of such an event transpiring… This methodology denotes that fact and allows for the creation of a dynamic accessibility structure, with access rights assigned at instantiation; after completion, those rights are as if they never existed.
The methodology itself was designed to utilize stealth tactics, such as spontaneous instantiation, access rights assigned at creation, utilizing a strong support structure, and destruction of the core, thereby destroying any access rights that were assigned at the flow level.
Special-purpose, or special-use, computing was a term coined in the early 1980s; much like ‘distributed computing’ or ‘cloud’, I’ve repurposed the term. (A little tangent:) it was originally used for Sun Microsystems and DEC VAX computers in the study of lattice quantum chromodynamics (QCD), essentially the particles, specifically quarks and gluons. Not to delve too far into particle physics, and truly make this already boring presentation much worse… loosely translated, how particles behave, precisely related to the exchange of proton particulates in an electromagnetically charged field. The term seemed to be a perfect fit, considering its course, and how this methodology, designed for military use, functioned to assess and execute a single task, such as a mission, a project, and so on…
Special-purpose computing reduces the chances of intrusion by establishing access rights via 1024-bit encrypted keys, spontaneously created at instantiation. These key pairs have a half-life, that of the user’s access capabilities and/or accessibility, and are tied to that specific fabric. I won’t delve too far into encryption and access as related to cryptography, as I don’t want to veer from the overall methodology, and the technology has since changed, especially with VMware instituting 2048-bit encrypted keys…
The application, or use, in other arenas is a practice I’ve only just started to address. Through some research, and common knowledge associated with some of the assumptions I’ve made, I settled on three arenas, mentioned on the next slide. Be that as it may, this methodology was originally designed specifically for military application; however, as of late, some of the security procedures being implemented in these three groups have adopted some of the same processes and procedures, making them a prime choice for hypothesizing computing theory.
As I’ve explained in the previous slides, the Cloud Burst Methodology was thought of, and implemented, in a military application. I’ve taken a few different private- and public-sector business cases and put them to pen and paper, localizing their potential as candidates and categorizing the results based on necessity.
I was going to write an amazing business case for the financial sector; however, with the IRS scandal and every other godforsaken scandal out there, no one seems to be paying much attention to them anymore anyway, so I thought we would just skip this section. No, truthfully, I just didn’t have enough time to do the due diligence necessary…
This is a representation of a full stack, consisting of several components mentioned before, namely the SOA backbone, the infrastructure-as-a-service model, orchestration and automation fabrics, as well as access control, identity management, and other shared components.
Mission, or in this case project, stakeholders take control of the Special Purpose Computing environment