Many “big data” software systems are not interactive or automated, and do not run in real time. The true utility of cloud computing and “big data” systems can be increased by providing an execution framework and control software that is native to cloud architectures and supports interactivity and time synchronization. In addition, a framework for integrating different artificial intelligence and machine learning algorithms is combined with the execution framework to create a powerful development platform for cloud computing systems.
Trustworthy Computational Science: A Multi-decade Perspective (Von Welch)
Trust is critical to the process of science. Two decades ago the Internet and World Wide Web fostered a new age in computational science with the emergence of accessible and high performance computing, storage, software, and networking. More recent paradigms, including virtual organizations, federated identity, big data, and global-scale operations continue to evolve the way computing for science is performed.
Advancing technologies, the need to coordinate across organizations and nations, and an evolving threat landscape are sources of ongoing challenges in maintaining the trustworthy nature of computational infrastructure and the science it supports. To address these challenges, a number of projects have focused on improving the cybersecurity and trustworthiness of scientific computing. Recent examples include the Center for Trustworthy Scientific Cyberinfrastructure funded by NSF, the Software Assurance Marketplace funded by DHS, and the Extreme Scale Identity Management for Science project funded by DOE.
This presentation will give a 20-year retrospective together with a vision for the future of cybersecurity for computational science. It will describe the state of trust and cybersecurity for scientific computing, its evolution over the past twenty years, the challenges it faces today, how the exemplar projects are addressing those challenges, and a vision of how cybersecurity for research and for higher education in general can augment each other in the future.
The Industrial Internet is an emerging communication infrastructure that connects people, data, and machines to enable access and control of mechanical devices in unprecedented ways. It connects machines embedded with sensors and sophisticated software to other machines (and end users) to extract data, make sense of it, and find meaning where it did not exist before. Machines – from jet engines to gas turbines to medical scanners – connected via the Industrial Internet have the analytical intelligence to self-diagnose and self-correct, so they can deliver the right information to the right people at the right time (and in real-time).
Despite the promise of the Industrial Internet, however, supporting the end-to-end quality-of-service (QoS) requirements is hard. This talk will discuss a number of technical issues emerging in this context, including:
Precise auto-scaling of resources with a system-wide focus.
Flexible optimization algorithms to balance real-time constraints with cost and other goals.
Improved fault-tolerance fail-over to support real-time requirements.
Data provisioning and load balancing algorithms that rely on physical properties of computations.
It will also explore how the OMG Data Distribution Service (DDS) provides key building blocks needed to create a dependable and elastic software infrastructure for the Industrial Internet.
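The data-centric publish/subscribe pattern underlying DDS can be sketched in a few lines. The following toy Python broker is an illustration only (the class and method names are invented here, not the DDS API), showing topic-based decoupling of publishers from subscribers plus a crude stand-in for the DDS DEADLINE QoS policy:

```python
import time

class Topic:
    """Toy pub/sub topic with a deadline-style QoS check: callers can
    ask whether a fresh sample arrived within `deadline` seconds.
    A simplified stand-in for the DDS DEADLINE policy, not real DDS."""
    def __init__(self, name, deadline=1.0):
        self.name = name
        self.deadline = deadline
        self.subscribers = []
        self.last_publish = None

    def subscribe(self, callback):
        # Subscribers never see the publisher, only the topic.
        self.subscribers.append(callback)

    def publish(self, sample):
        self.last_publish = time.monotonic()
        for cb in self.subscribers:
            cb(self.name, sample)

    def deadline_missed(self):
        if self.last_publish is None:
            return True
        return (time.monotonic() - self.last_publish) > self.deadline

# A sensor topic with a subscriber decoupled from the publisher.
received = []
pressure = Topic("turbine/pressure", deadline=0.5)
pressure.subscribe(lambda topic, s: received.append((topic, s)))
pressure.publish({"psi": 101.3})
```

Real DDS implementations add discovery, transport, and many more QoS policies (reliability, durability, history); the sketch only shows why topic-based decoupling helps build elastic infrastructures.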
Accumulo Summit 2014: Addressing big data challenges through innovative archi... (Accumulo Summit)
The ability to collect and analyze large amounts of data is a growing challenge within the scientific community. The widening gap between data and users calls for innovative tools that address the challenges posed by big data volume, velocity, and variety. MIT Lincoln Laboratory (MIT LL) is not immune to these challenges and has developed a set of tools that address many of them.
Big data volume stresses the storage, memory, and compute capacity of a computing system and requires access to a computing cloud. Choosing the right cloud is problem specific. Currently, four multi-billion dollar ecosystems dominate the cloud computing environment: enterprise clouds, big data clouds, SQL database clouds, and supercomputing clouds. Each cloud ecosystem has its own hardware, software, conferences, and business markets. The broad nature of business big data challenges makes it unlikely that one cloud ecosystem can meet all needs, and solutions are likely to require tools and techniques from more than one ecosystem. The MIT SuperCloud was developed to address this challenge. To our knowledge, the MIT SuperCloud is the only deployed cloud system that allows all four ecosystems to co-exist without sacrificing performance or functionality.
Big data velocity stresses the rate at which data can be absorbed and meaningful answers produced. Led by the NSA, a Common Big Data Architecture (CBDA) was developed for the U.S. government based on the Google BigTable NoSQL approach and is now in wide use. MIT/LL played a leading role in developing the CBDA and is a leader in adapting it to a variety of big data challenges.
Big data variety may present the largest challenge and the greatest opportunity. The promise of big data is the ability to correlate diverse and heterogeneous data to form new insights. The centerpiece of the CBDA is the NSA-developed Apache Accumulo database (capable of millions of entries per second) and the MIT/LL-developed D4M schema. These technologies allow vast quantities of highly diverse data (text, computer logs, social media data, etc.) to be automatically ingested into a common schema that enables rapid query and correlation of every element.
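The ingest-into-a-common-schema idea can be illustrated with a small sketch. The snippet below is a simplified, hypothetical rendering of the D4M associative-array approach (the real D4M schema and Accumulo API differ): each heterogeneous record is exploded into (row, column, value) triples so that any field from any source can be queried uniformly:

```python
def to_triples(record_id, record):
    """Flatten a heterogeneous record into (row, column, value) triples,
    the shape used by Accumulo-style tables. A simplified illustration
    of the D4M associative-array idea, not the real D4M API."""
    triples = []
    for field, value in record.items():
        # Encode field and value together in the column so both can be
        # scanned, e.g. column "src|10.0.0.1" for a log record.
        triples.append((record_id, f"{field}|{value}", 1))
    return triples

# Two records of entirely different shape land in one common table.
log = {"src": "10.0.0.1", "user": "alice"}
tweet = {"user": "alice", "tag": "bigdata"}
table = to_triples("log-001", log) + to_triples("tweet-042", tweet)

# Correlate: every row mentioning user alice, regardless of source type.
alice_rows = sorted({row for row, col, _ in table if col == "user|alice"})
```

Because every field becomes a scannable column key, correlation across text, logs, and social media reduces to key lookups instead of per-source parsing.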
The talk will concentrate on how we utilize the aforementioned technologies in our mission to apply advanced technology to problems of national security.
Testing and Development Challenges for Complex Cyber-Physical Systems: Insigh... (Sebastiano Panichella)
Keynote presentation at ICST (AIST workshop) entitled "Testing and Development Challenges for Complex Cyber-Physical Systems: Insights from the COSMOS H2020 Project"
The concept of edge computing is to leverage new-generation technologies, processes, services, and applications built to take advantage of new infrastructure.
Processing is placed closer to the edge of the network: data is pre-processed there and then sent to the cloud.
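The pre-process-at-the-edge pattern can be sketched in a few lines, assuming a simple windowed-average reduction (a minimal illustration; real edge pipelines add buffering, retries, and compression):

```python
def edge_preprocess(samples, window=4):
    """Reduce raw sensor samples to per-window averages at the edge,
    so only the summary is sent to the cloud. A minimal sketch of the
    pre-process-then-upload pattern."""
    summaries = []
    for i in range(0, len(samples), window):
        chunk = samples[i:i + window]
        summaries.append(sum(chunk) / len(chunk))
    return summaries

raw = [10, 12, 11, 13, 50, 52, 51, 53]   # 8 readings captured at the edge
payload = edge_preprocess(raw)            # only 2 values cross the network
```

The bandwidth saving here is 4x; the cloud still sees the jump in the signal without receiving every raw reading.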
This presentation gives a high-level overview of cyber resilience: important concepts, the difference between cybersecurity and cyber resilience, and frameworks aimed at achieving or assessing an organization's cyber resilience.
SDN/NFV is following the same path Linux and the Internet did...
Mentioned during the Open Networking Summit 2014, Santa Clara, March 4th.
Re-engineering Engineering
Vinod Khosla
Kleiner Perkins Caufield & Byers
vkhosla@kpcb.com
Sept 2000
Grid Computing - Collection of computer resources from multiple locations (Dibyadip Das)
Grid computing is the collection of computer resources from multiple locations to reach a common goal. The grid can be thought of as a distributed system with non-interactive workloads that involve a large number of files.
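A minimal sketch of how such a non-interactive, file-heavy workload might be spread across grid nodes (round-robin only, for illustration; real grid schedulers such as HTCondor also weigh node load, data locality, and failures):

```python
def partition_files(files, n_nodes):
    """Round-robin a large non-interactive file workload across grid
    nodes. A deliberately simple placement policy: each node receives
    every n_nodes-th file."""
    buckets = [[] for _ in range(n_nodes)]
    for i, f in enumerate(files):
        buckets[i % n_nodes].append(f)
    return buckets

files = [f"chunk_{i:03d}.dat" for i in range(7)]
plan = partition_files(files, 3)   # work plan for a 3-node grid
```

Each node then processes its bucket independently, which is exactly why non-interactive workloads with many files suit the grid model.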
Dr. Jeff Daniels and Dr. Ben Amaba discuss performance without limits 2.0 with a special focus on emerging research in industrial control systems, cybersecurity, cloud computing, and the pervasiveness of mobile devices.
This talk was given at the Winspear Opera House in Dallas, Texas.
This presentation shares his insights on design simulation and the new role of the simulator in Simulator Assisted Engineering. For more information, go to GSES.com, email info@gses.com, or call 800-638-7912. You can also follow GSE at @GSESystems and Facebook.com/GSESystems.
This is the presentation of a paper on integrating artificial intelligence into the systems engineering lifecycle.
You can find more information in the following link: https://event.conflr.com/IS2019/sessiondetail_395325
Integrating Mobile Technology in the Construction Industry (Appear)
Mobile technology has tremendous potential to transform the way civil engineering and construction is delivered – throughout the lifecycle from planning through execution to decommissioning. This webinar illustrates how the EU MobiCloud project is helping to deliver on this potential using a novel IT infrastructure for developing and managing construction-oriented mobile applications.
4 TeraGrid Sites Have Focal Points:
SDSC – The Data Place: large-scale and high-performance data analysis/handling; every cluster node is directly attached to the SAN
NCSA – The Compute Place: large-scale, large-FLOPS computation
Argonne – The Viz Place: scalable viz walls
Caltech – The Applications Place: data and FLOPS for applications, especially some of the GriPhyN apps
Specific machine configurations reflect this
This talk was given at a workshop entitled "Cybersecurity Engagement in a Research Environment" at Rady School of Management at UCSD. The workshop was organized by Michael Corn, the UCSD CISO. It tries to provoke discussion around the cybersecurity features and requirements of international science collaborations, as well as more generally, federated cyberinfrastructure systems.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses.
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
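Among the simulation tools listed above, the power flow is the core computation. As a library-free illustration of the underlying math, here is the textbook DC power-flow approximation on a hypothetical 3-bus network; this is generic power-systems math, not PowSyBl's own API or models:

```python
def dc_power_flow():
    """DC power flow on a hypothetical 3-bus network: bus 1 is the
    slack, buses 2 and 3 carry loads of 0.5 and 1.0 p.u., and all
    three lines (1-2, 1-3, 2-3) have reactance x = 0.1 p.u.
    Textbook DC approximation, not PowSyBl's API."""
    x = 0.1
    b = 1.0 / x                       # per-line susceptance
    # Reduced susceptance matrix for buses 2 and 3 (slack removed):
    # each diagonal sums the susceptances of the two lines at the bus.
    B = [[2 * b, -b],
         [-b, 2 * b]]
    P = [-0.5, -1.0]                  # net injections (loads negative)
    # Solve the 2x2 system B * theta = P by Cramer's rule.
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    th2 = (P[0] * B[1][1] - B[0][1] * P[1]) / det
    th3 = (B[0][0] * P[1] - P[0] * B[1][0]) / det
    # Line flows follow from angle differences (slack angle is 0).
    flows = {"1-2": (0.0 - th2) / x,
             "1-3": (0.0 - th3) / x,
             "2-3": (th2 - th3) / x}
    return th2, th3, flows

th2, th3, flows = dc_power_flow()
```

Tools like PowSyBl wrap this kind of computation (and full AC variants) behind network models, so users describe equipment rather than matrices.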
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Threats to mobile devices are increasingly prevalent and growing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many of those features trade security for convenience and capability. This best-practices guide outlines steps users can take to better protect personal devices and information.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... (Neo4j)
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Essentials of Automations: The Art of Triggers and Actions in FME (Safe Software)
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Elevating Tactical DDD Patterns Through Object Calisthenics (Dorra BARTAGUIZ)
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 (Neo4j)
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Removing Uninteresting Bytes in Software Fuzzing (Aftab Hussain)
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
These are the slides of the talk given at the IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW 2022).
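The core DIAR idea described above, flagging seed bytes whose mutation never changes observed coverage, can be sketched as follows. This is a simplified illustration with a toy coverage function, not the actual DIAR implementation:

```python
def find_uninteresting_bytes(seed, coverage, trials=8):
    """Flag seed bytes whose mutation never changes the program's
    coverage fingerprint across a few trial mutations. A simplified
    sketch of the DIAR idea, not the real implementation."""
    baseline = coverage(seed)
    uninteresting = []
    for i in range(len(seed)):
        changed = False
        for t in range(1, trials + 1):
            mutated = bytearray(seed)
            mutated[i] = (mutated[i] + t) % 256  # perturb one byte
            if coverage(bytes(mutated)) != baseline:
                changed = True
                break
        if not changed:
            uninteresting.append(i)
    return uninteresting

# Toy target whose "coverage" depends only on the first two header bytes,
# so every other byte in the seed is dead weight for the fuzzer.
cov = lambda data: (data[0], data[1])
seed = b"MZ....padding...."
boring = find_uninteresting_bytes(seed, cov)
```

Trimming the flagged bytes yields the lean seeds the abstract describes, so the fuzzer stops spending mutations on bytes the target never inspects.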
Cloud Computing and Intelligent Systems: Two Fields at a Crossroads
1. Infinite Dimensions – Solving Tomorrow’s Problems Today (CSCI'14)
Cloud Computing and Intelligent Systems: Two Fields at a Crossroads
Dr. Jeffrey Wallace
The 2014 International Conference on Computational Science and Computational Intelligence (CSCI'14)
2. Cloud Computing and Intelligent Systems
• Context
• Technical Challenges
• Examples
  – Unmanned Systems Control
    • Anti-ship Missile Defense
    • Battlefield Extraction of Wounded
• Technology Enablers
  – System/Component Integration
  – Algorithm/Environment Integration
• Summary
3. Context
• Unique Systems Engineering and Development Capability
  – A distillation of over $2B in government R&D
  – Addresses both people and technology
• Partnered with the Office of the Secretary of Defense, the Joint Chiefs of Staff, the 4 services, Academia, and others
• Solved world-class/grand challenge problems on F-35, CVN-21, healthcare IT, etc.
• Systems Integrator for the Continuous Transformation Environment
5. CTE at a Glance
• CTE Objectives
  – Prototype development and experimentation: innovation by industry, government, and academia in an open systems collaborative environment
  – Integration, verification, test, and release of Commercial-Off-The-Shelf (COTS) products for government use
  – Rapidly prove operational utility of high-technology solutions
  – Open systems and standards compliance evaluation, documentation, and capabilities matrix
• Solve the GAO-identified big integrator problem [1]
  – Organizing principle: “Give innovation a chance.”
• Consortium of large and small technology companies and facilities partners
[1] Government Accountability Office-09-326SP, http://www.gao.gov/new.items/d09326sp.pdf
6. CTE Network
Quality Technology Services (QTS) and Verizon (VZW) are the current facilities partners.
OCONUS sites are VZW.
CONUS VZW: Englewood, CO; Culpeper, VA; Miami, FL.
Remaining CONUS sites are QTS, in particular the 1.3M sq. ft. Richmond, VA site.
7. Main CTE Experimentation Lab Location
Richmond – High-Density Multi-Data Center Campus
• 500,000 sq ft of planned raised floor
• Multiple distinct data center buildings (current basis of design)
• Office space
• 1.3 million sq ft campus
8. Technical Challenges
• Rapidly create complex, realistic, and scalable networks of systems and component inter-relationships
• Distribution of autonomous controls and monitors
• Implementation of complex webs of cause and effect
• Dynamic alteration of the component execution structure
  – Adaptation and evolution of the system
• Ability to handle billions of active processes in real-time
  – Harness the power of sequential, distributed, and/or parallel processing – optimizing the use of any compute/network/storage configuration
  – Smartphones to supercomputers
9. Unmanned Systems Control
October 27, 2008 – USIC Conference, San Diego, CA
10. Infinite Dimensions Solving Tomorrow’s Problems TodayCSCI'14
Hypersonic Anti-Ship Missile
Challenges
• Critical Battlespace defense gap
• Interactions happening faster than humans
can react
• Current Command and Control (C2) is not
real-time, performance limited, and the
only source of information
• Greatly affects real-time response for
kinetic engagement capabilities
11. Infinite Dimensions Solving Tomorrow’s Problems Today CSCI'14
Anti-Ship Missile
Defense
• Increase response time for the Fleet
• Simplify installation, maintenance, & operation
[Diagram: space sensor, SBX, DDG, UAS, USV, and weapon nodes networked to defeat an anti-ship missile threat, e.g. DF-21]
12. Infinite Dimensions Solving Tomorrow’s Problems Today
[Diagram: concentric perimeters around the Sea Base: ASW threat identification perimeter, SUW threat monitoring perimeter, SUW threat identification perimeter, and the 18 nmi ASMD threat perimeter]
13. Infinite Dimensions Solving Tomorrow’s Problems Today
[Diagram: the same concentric Sea Base threat perimeters as the preceding slide]
14. Infinite Dimensions Solving Tomorrow’s Problems Today
Cloud of Projectiles
Each USV’s Gun Creates a Cloud
15. Infinite Dimensions Solving Tomorrow’s Problems Today CSCI'14
New Fleet Capabilities
• Automated Fleet Systems
– Adaptive & composable
– Knowledge built into systems
– Emphasizes speed and flexibility
• Ability to fuse new information, knowledge,
and structures rapidly
– Mesh networks and systems
– Example: Sensor net that automatically refocuses
based on accurate real time fused information
• C2 and fire control move from Fleet
platforms to the network
16. Infinite Dimensions Solving Tomorrow’s Problems Today
Inter-system Interoperability and Interaction with Personnel “On
Scene” Example: Nightingale II
Autonomous transit from starting point,
to pick-up point, to medical unit:
A. Call for MedEvac received at Nightingale Control
B. Best UAV is chosen automatically
C. Route is autonomously planned & uploaded (around C2 and no-fly zones)
D. UAV is launched automatically
1–2. Autonomous collision & obstacle avoidance en route
3. Autonomous clear-zone landing
4–5. UGV+BEAR deploys; BEAR deploys
6. BEAR recovers & Medic treats
7. UAS/UGV/BEAR system rejoins and goes to destination
Similar process for: Logistics, Combat Rescue, & Special Ops
17. Infinite Dimensions Solving Tomorrow’s Problems Today CSCI'14
Nightingale II
• Autonomous:
– VTOL UAV
– High-mobility UGV(s)
– Asset allocation and mission planning
• Interoperability / coordination with existing
– Dismounted ground personnel
– Air operations
– Ground operations
– Artillery & strike operations
– Political & no-fly boundaries
– High-data-rate non-LOS communications
18. Infinite Dimensions Solving Tomorrow’s Problems Today CSCI'14
Nightingale II
Challenges
• Autonomous VTOL UAV
– Autonomous obstacle avoidance
• Wires, antennae, etc.
– Sensor with sufficient resolution & range for
vehicle maneuverability limits
– Day, night, weather, dust/sand/dirt (“brown
out”)
– Autonomous collision avoidance
(other aircraft)
• Small UAVs, birds, etc.
– Sensor with sufficient resolution & range for
vehicle maneuverability limits
– Day, night, weather, dust/sand/dirt (“brown
out”)
19. Infinite Dimensions Solving Tomorrow’s Problems Today CSCI'14
Nightingale II
Challenges
• Autonomous VTOL UAV (cont.)
– Autonomous LZ identification
• LZ size, geometry, roughness, slope
• Ingress/egress flight-path
– Obstacles
– Moving ground personnel, vehicles
– Exposure to enemy
» Lines of fire
» Exposure time
• Coordination with UGV mobility constraints
– Navigable path between LZ and casualty
20. Infinite Dimensions Solving Tomorrow’s Problems Today CSCI'14
Nightingale II
Challenges
• Rough-terrain UGV(s)
– Mobility
• Tracks, wheels, legs
• Water crossing
• Mud, sand, snow, ice
• Interiors (stairs, doors, elevators, etc.)
– Autonomous capabilities
• Beyond Grand Challenge, Urban Challenge
– Casualty extraction
• Careful casualty handling
• Does the UGV need a UGV?
21. Infinite Dimensions Solving Tomorrow’s Problems Today CSCI'14
System/Component Integration
• Most well-known methods and technology
originate in “business IT”…
• Optimizing execution and efficiency and
enabling computation at scale is typically
the province of “high performance
computing”
• Real-time execution and synchronization
are addressed by several communities,
e.g. robotics
23. Infinite Dimensions Solving Tomorrow’s Problems Today
Typical Complex Application-System
[Diagram: Applications 1…X consume Data Sources 1…N through Compute Processes 1…M; each application blocks on its inputs, e.g.
WAIT_FOR (DS1 semaphore, wait_time_1)
…
WAIT_FOR (DS_P semaphore, wait_time_P)]
24. Infinite Dimensions Solving Tomorrow’s Problems Today CSCI'14
The DNA of Complex Systems
• Look at nature to understand
complex systems
• Internal Processes
• External Processes
• Internal Events
• External Events
• Intermix of all four is required
–Implementing in a scalable manner is key
25. Infinite Dimensions Solving Tomorrow’s Problems Today CSCI'14
Internal Processes
Analogy: The Heart Beat
• Atria pump blood to
ventricles, which
contract
• Nonstop contractions
are driven by the heart's
electrical system
Internal Process: Synchronous or Asynchronous –
Intrinsic Capabilities
26. Infinite Dimensions Solving Tomorrow’s Problems Today CSCI'14
External Processes
Analogy: Pacemaker
• External process monitors
and interacts with an
object (i.e., a pacemaker
monitors the heart’s
rhythm)
• The electric current makes
the heart beat within a
certain range
External Process: Synchronous or Asynchronous –
Monitor and Control
27. Infinite Dimensions Solving Tomorrow’s Problems Today CSCI'14
Internal Events
Analogy: Heart Attack
• Internal occurrence without
pre-established time scale
• Certain factors cause the
occurrence: blood flow is
restricted, or the nervous
system, which controls the
heart, malfunctions
Internal Occurrence: Irregular Time Scale – Intrinsic
Capabilities
28. Infinite Dimensions Solving Tomorrow’s Problems Today CSCI'14
External Events
Analogy: Defibrillation
• External event changes a
passive object’s state (i.e., a
defibrillator is used for
resuscitation)
• External electrical shock is
applied to the heart
• Foundational representation
method
External Occurrence: Irregular Time Scale – Monitor and
Control
29. Infinite Dimensions Solving Tomorrow’s Problems Today
Configurable Computational Mesh
Polymorphic Computing Architecture (PCA)
FE = Functional Element
[Diagram: a mesh of interconnected Functional Elements with slow, medium, and fast communication links, exposing User Application, Hardware Device, 3D Visualization, and Web Interface services]
30. Infinite Dimensions Solving Tomorrow’s Problems Today
Composability Automation: CASE Tool Environment
• Component Repository
• Interfaces: user-defined IT system interface, user-defined hardware interface, Web Services API (JNI, SOAP, OWL, etc.)
• Transports: Shared Memory, Reflective Memory, IP, JTRS, BLOS, Link-16, others
• System Execution Services: Security, State Saving, Core Programming, Distributed Object Mgmt, Std App Dev Interface, Synchronization Management
• Intelligent Application Services: Event Management Services, Knowledge Representation, Integration Meta-Data, Data Translation
• Common Application Services: Communication Services (Unicast, Multicast, Broadcast), Service Decomposition, Compression, Encryption
31. Infinite Dimensions Solving Tomorrow’s Problems Today
API Example
Turret/Fire Control
Slew Elevate
Fire When Slew and Elevate are Complete
Process Firing Commands (and Queuing Them)
32. Infinite Dimensions Solving Tomorrow’s Problems Today
Example: Turret Fire Command
void Turret::fire()
{
P_VAR
P_BEGIN(2)
// Wait until the turret movement is completed
WAIT_FOR(1, slewComplete, -1);
WAIT_FOR(2, elevateComplete, -1);
// Fire the weapon, this would activate the real gun
Fire_M256();
RB_cout << "Flash, Boom, Bang, Echo" << endl;
fireComplete = 1;
P_END
}
33. Infinite Dimensions Solving Tomorrow’s Problems Today CSCI'14
Algorithm/Environment
Integration
• Must be able to integrate arbitrary types of
models (e.g. rule-based, network-based,
fuzzy, combinations, etc.)
• A system must account for functionally
disparate phenomena, in order to
represent “intelligence” more effectively,
for example:
– Recognition of patterns
– Adopting new solutions or strategies
– Rule following
34. Infinite Dimensions Solving Tomorrow’s Problems Today
Adaptive, Dynamic, Knowledge-based
Execution/Control
Conceptual Graphs
Computational Ontology Framework
• Node types: Concepts, Relationships, Actors
• Example graph: [CAT] → (STAT) → [SIT] → (LOC) → [MAT]
• The same graph reads “A cat sits on a mat” in English and “Une Chat assis sur une matte” in French
35. Infinite Dimensions Solving Tomorrow’s Problems Today
Basic CG Formation Rules
Concept 1
Relationship 1
Relationship 1
Concept 2
Concepts or Other Actors
Actor 1
Actors or Relationships
Actors or Relationships
Concept 1
Concept
Concept or other Actors
Concept 1
Relationship 1
Actor 1
INVALID
36. Infinite Dimensions Solving Tomorrow’s Problems Today CSCI'14
Extensions
• A CG execution engine with abilities to:
– Embed in a hardware controller or a software
program
– Associate the software and hardware components as
indicated by the CG structure
– Control, execute, and integrate the entire system
• Customization of concepts, relationships, and
actors by the integrator, providing capability to:
– Accommodate new hardware, or software
components, without returning to original developers
for new versions of graph systems
37. Infinite Dimensions Solving Tomorrow’s Problems Today CSCI'14
Extensions
• A mechanism to control the CG execution
engine
– Each node maintains a truth value to
manage graph execution
– A concept from belief network theory
• Collectively, the extensions embodied in
the system enable intelligent automation,
execution, and control of complex
hardware and software assemblies
38. Infinite Dimensions Solving Tomorrow’s Problems Today CSCI'14
Summary of Extensions
• A collection of user-defined concept nodes
• A collection of user-defined relationship nodes
• A collection of user-defined actor nodes
• A unique ID numbering scheme to identify every node, regardless of
type
• A description of the connected nodes, and route of the connection
• A list of references to input concept nodes (those with no incoming
arcs)
– A valid CG must contain at least one input concept node
• A list of references to the output concept nodes (those with no
outgoing arcs)
– A valid CG must contain at least one output concept node
• A data structure that records the truth value of each node
39. Infinite Dimensions Solving Tomorrow’s Problems Today CSCI'14
Paradigm of Execution
1. The system is started
a. Each component in the system is initialized
b. Synchronization relationships are established
c. Inputs are read and loaded into system components
2. The CG associated with the system is initialized and parsed
a. Each system component is associated with a CG element using the unique ID
tag (concept, relationship, actor)
b. Each component is registered in the correct collection mechanism for each type,
using the unique ID tag
3. The system begins operating, with various evolution methods. For
example:
a. A user inputs information, which changes the state of the system (either event-
based, process-based, or simple update loop)
b. Regular update cycles occur for various system aspects
4. The system operation mechanism executes, and an Execution Cycle
operation for the CG is activated
5. Each time a system operation mechanism is activated, step 4 is repeated
40. Infinite Dimensions Solving Tomorrow’s Problems Today
CG Execution
[Worked example: input concepts Dividend = 9, Divisor = 4, and Number = 144 feed actors divide, sqrt, and plus. divide produces Quotient = 2 and Remainder = 1; sqrt produces SquareRoot = 12; plus combines Quotient and SquareRoot into Sum = 14 (shown first as the unbound marker *s). A conditional relationship, IF ?r = 0, tests the Remainder, and truth values (T/F) on the nodes control which arcs fire.]
41. Infinite Dimensions Solving Tomorrow’s Problems Today
Example of a Path of Route
in a CG
[Diagram: a path through a CG connecting the concepts A Machine, An Interpreter, Algorithm, An Interpretation, An Agent, A Graph, and A Behavior via the relations runs, is a, generating, for, responsible for, of, and producing: a machine runs an interpreter, which is an algorithm generating an interpretation of a graph for an agent responsible for producing a behavior]
42. Infinite Dimensions Solving Tomorrow’s Problems Today CSCI'14
Summary
• Representation of complex webs of
synchronized causes and effects is central
to the implementation of complex systems
• Computation, correlation of simultaneously
evolving systems and interrelated
phenomena
• Ability to control an activity based on a
web of logic, and start another in response
to dynamic conditions
• Achievement of scalability without loss of
capability
44. Infinite Dimensions Solving Tomorrow’s Problems Today CSCI'14
Software Development Framework: JEE
• Minimizes amount of code, people, skill-level,
development, test, integration, and time
required
• Standardizes processes, templates, and
documentation
• Works with all modern languages and
standard Application Programming Interfaces
(APIs)
• Does not compromise any other software
vendor’s intellectual property – typically a key
cost, schedule, and performance driver
• Can be taught to all development team
members in an extremely short period of time
(< 1 to 2 days)
45. Infinite Dimensions Solving Tomorrow’s Problems Today
Example: Building Automation
Management System
Level 1: Information management system
Level 2: Information processing and supervisory system
Level 3: Information processing / automation system
Level 4: Process field
46. Infinite Dimensions Solving Tomorrow’s Problems Today
Components in a BAMS
[Diagram: the Building at the center, connected to fan, air conditioning, PC, heat energy, sanitary, lighting, waste management, electrical, acoustics, video, safety, elevator, emergency power supply, transformer, and heating subsystems]
Editor's Notes
The Aegis combat system talks to the Standard Missile a pitiful 4 times a second to update guidance and telemetry information.
The focus of over half a decade of war in Southwest Asia has driven innovation, investment, and focus away from defense against high-technology, state-actor-funded weapons systems, such as the newly rumored DF-21D, reputed to be a “carrier-killer”. Reliably countering anti-ship missiles, ballistic or otherwise, requires a degree of speed and coordination between system components that is beyond today’s technology. Integrating the components also requires a great deal of time and money; rapidly integrating new systems into the networks requires months of time and testing. The new solution finally makes the marketing slogan “plug-n-play” a reality. Adding a new sensor or system to the network is now a matter of installing and configuring a board or network appliance.