This document discusses several topics related to architecture over time. It provides a brief history of architectural developments from neolithic times through classical antiquity. It also discusses some of the key technological advances of these eras, such as the development of the wheel, lever, and adobe bricks. Additionally, the document discusses modern challenges in cloud design, data management, security issues, and architectural patterns. It emphasizes that architectures must evolve to continue serving their intended purposes.
The Learning Shared Services Center (LSSC) aims to deliver innovative learning solutions to enable the organization to become high-performing. Its mission is to provide performance consulting, learning management, content development, and management services. The goals of the LSSC are to support the corporate learning strategy and processes, set up and maintain learning and performance management, and ensure learning content quality and access. The LSSC manages learning as a business process through functions like operations, client management, vendor management, and service level agreements. It provides products like learning consulting services, content sourcing, and a content repository.
The document recommends top mobile apps across categories including RSS feeds, photos, music, video, maps, backup, organizing, and social media. It lists Google Reader, SmugMug, Twixtr, seeqpod, Last.fm, Videora, Google Maps, databinge, bitpim, Jott, Evernote, Facebook, Meebo, Twitter, Fring, and Brightkite among the best apps in these categories on mobile platforms. The document also credits the sources of two slideshow presentations related to mobile design and marketing.
1) RCML is a pragmatic way to build domain-specific languages using Eclipse. It allows defining external DSLs using XML and JavaScript.
2) The document discusses using RCML to model complex workflows for a Middle Office Financial Platform, including portfolio management, position keeping, and document logistics.
3) RCML scripts are provided as examples to automate tasks like submitting and querying print jobs, as well as testing financial deals and assertions against portfolios.
This document analyzes the income statement and cost-volume-profit (CVP) relationships of Creative Ventures. It provides projections of revenue, expenses, and net income under different scenarios: status quo, expanded operations, reduced operations, and sale of the company. The analysis shows that under status quo and expanded operations the company would have positive net income, while reduced operations and sale of the company would result in lower or negative net income. Projected figures are compared to actual past performance data.
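The cost-volume-profit logic behind such scenario projections can be sketched with hypothetical figures; the actual Creative Ventures numbers live in the source spreadsheet and are not reproduced here:

```python
# Hypothetical unit economics, for illustration only.
price_per_unit = 50.0
variable_cost_per_unit = 30.0
fixed_costs = 40_000.0

# Contribution margin per unit and break-even volume.
contribution_margin = price_per_unit - variable_cost_per_unit
break_even_units = fixed_costs / contribution_margin   # 2000 units

def net_income(units):
    """Projected net income at a given sales volume."""
    return units * contribution_margin - fixed_costs

# Scenario comparison: expanded vs. reduced operations.
expanded = net_income(3_000)   # positive: 20000.0
reduced = net_income(1_500)    # negative: -10000.0
```

Each scenario in the document amounts to re-running this computation with different volume, price, and cost assumptions.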
The document provides a revenue analysis dashboard with key metrics including margins by month, contribution margins by month, revenue by month, cost of goods sold by month, top 10 customers, top 10 selling items, bottom 10 customers, and bottom 10 products. It also lists the top and bottom performing customers by revenue and top selling items by sales amount.
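The top-10/bottom-10 rankings on such a dashboard reduce to sorting revenue by customer or item; a minimal sketch with made-up figures (not the dashboard's real data):

```python
# Hypothetical revenue-by-customer figures, for illustration.
revenue_by_customer = {
    "Acme": 120_000, "Globex": 95_000, "Initech": 87_000,
    "Umbrella": 15_000, "Hooli": 4_200,
}

def top_n(revenue, n):
    """Customers ranked by revenue, highest first."""
    return sorted(revenue, key=revenue.get, reverse=True)[:n]

def bottom_n(revenue, n):
    """Customers ranked by revenue, lowest first."""
    return sorted(revenue, key=revenue.get)[:n]

top3 = top_n(revenue_by_customer, 3)       # ['Acme', 'Globex', 'Initech']
bottom2 = bottom_n(revenue_by_customer, 2) # ['Hooli', 'Umbrella']
```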
The document outlines an agenda for a Safe Harbor meeting with 6 presentations on topics related to software-defined networking (SDN). The schedule includes introductions from Dan Pitt of the Open Networking Foundation and David Meyer, followed by presentations from Jennifer Rexford on Frenetic and Mike Freedman on service-centric networking. The meeting will conclude with closing remarks and a social gathering. The document provides context on SDN and topics like OpenFlow, declarative network programming, and new network architectures.
This chart shows the sales amount, total product cost amount, and order quantity for 2002, with a big dip in sales in month 6, when a mountain bike model was phased out and product costs exceeded sales. It also corroborates that the mountain bike model was phased out in month 6 and that a new bike model introduced in month 7 sold nearly a thousand more units per month.
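The validation described, checking for months where product cost exceeds sales, is a simple scan over monthly figures; the numbers below are illustrative, not the chart's actual data:

```python
# Hypothetical monthly sales and cost figures showing the pattern in the
# chart: a dip in month 6 where product cost exceeds sales.
sales = [100, 105, 110, 108, 102, 60, 130, 135, 140, 138, 142, 145]
costs = [70, 72, 75, 74, 71, 65, 80, 82, 84, 83, 85, 86]

# Months (1-based) where cost exceeded sales.
loss_months = [m + 1 for m, (s, c) in enumerate(zip(sales, costs)) if c > s]
# loss_months == [6]
```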
Software is increasingly powering technological innovation and disrupting traditional industries as more aspects become software-defined. Mobile data usage continues rising rapidly, driven mainly by video. The internet economy is advancing towards a service-centric model. Software-defined networking aims to make networks more programmable through logical abstractions, enabling new capabilities like traffic engineering and dynamic configuration. However, distributed systems and concurrency present challenges to scaling that require ongoing research.
Here are the top 10 questions to ask when shopping for a mortgage: What is the interest rate and annual percentage rate? What fees are involved in getting this mortgage? What is the loan term in years? Will I have to pay points upfront for a lower interest rate? What kind of down payment is required? Is the interest rate fixed or adjustable? What kind of homeowner's insurance is required? Will the lender require private mortgage insurance? What kind of escrow account will be set up for taxes and insurance? What other costs are involved in getting this mortgage?
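Several of these questions (rate, term, points) feed directly into the standard fixed-rate payment formula; a small sketch with a hypothetical loan:

```python
def monthly_payment(principal, annual_rate, years):
    """Standard fixed-rate amortization formula:
    M = P * r / (1 - (1 + r)**-n), where r is the monthly rate
    and n the number of monthly payments."""
    r = annual_rate / 12
    n = years * 12
    if r == 0:
        return principal / n
    return principal * r / (1 - (1 + r) ** -n)

# Hypothetical example: $300,000 loan, 6% annual rate, 30-year term.
payment = monthly_payment(300_000, 0.06, 30)   # ~ $1,798.65/month
```

Note this covers principal and interest only; taxes, insurance, and PMI (other items in the question list) are added on top via the escrow account.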
The current Internet architecture has multiple layers, including application, socket, IP address, and MAC address layers. At the lowest levels, a MAC address and IP address both point to the same physical location, while higher-level protocols like sockets and services use the IP address along with additional identifiers like service IDs and socket IDs to enable communication between applications running on different host devices on a network.
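The layering described can be illustrated as a tuple of identifiers, with the MAC staying local to the link while higher layers name the application; the values below are illustrative, not real addresses:

```python
# One application endpoint, as seen through the layers described above.
endpoint = {
    "mac": "aa:bb:cc:dd:ee:01",   # link layer: the physical interface
    "ip": "192.0.2.10",           # network layer: same host as the MAC
    "service_id": 80,             # service identifier (e.g. a port)
    "socket_id": 49152,           # per-connection socket identifier
}

def endpoint_key(ep):
    """Higher layers address an application by (IP, service, socket);
    the MAC is resolved locally and never leaves the link."""
    return (ep["ip"], ep["service_id"], ep["socket_id"])
```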
The document discusses various financial ratios used to analyze different aspects of a business's performance. It covers ratios related to liquidity, investment/shareholders, gearing, profitability, and financial metrics. Specific ratios discussed include the current ratio, acid test ratio, earnings per share, price earnings ratio, gearing ratio, gross profit margin, net profit margin, return on capital employed, asset turnover, stock turnover, and debtor days.
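Each of the ratios named reduces to a simple quotient; a sketch with illustrative figures (not taken from the document):

```python
# Hypothetical balance-sheet and P&L figures, for illustration.
current_assets = 500.0
stock = 200.0                 # inventory
current_liabilities = 250.0
revenue = 1_000.0
gross_profit = 400.0
net_profit = 120.0

# Liquidity ratios.
current_ratio = current_assets / current_liabilities         # 2.0
acid_test = (current_assets - stock) / current_liabilities   # 1.2

# Profitability ratios (as percentages).
gross_margin = gross_profit / revenue * 100                  # 40.0
net_margin = net_profit / revenue * 100                      # 12.0
```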
The document summarizes key aspects of ancient Chinese history, including important dynasties like the Qin, Han, Tang, and Song. It discusses the Chinese social hierarchy, family structures, the teachings of Confucius, geography, economy, trade routes like the Silk Road, important inventions and technologies, the writing system, architecture, and daily life. The dynasties established centralized governments and brought periods of stability and prosperity to China, while the teachings of Confucius emphasized family and social harmony.
Steve's presentation at ICCC 2009 (Stephen McIntyre) – Wladimir Illescas
This document discusses criticisms of claims that the 1990s were the warmest decade and 1998 the warmest year of the millennium based on temperature reconstructions. It notes that minor variations in data versions and proxies can yield opposite results. It also discusses criticisms of the "hockey stick" temperature graph that featured prominently in IPCC reports, and disputes the claim that multiple independent studies all found late 20th century warming, noting that many relied on common proxies. The document questions whether key proxies like bristlecones have been robustly updated and whether simple statistical models apply to complex trees.
The document provides instructions to review credit reports every 4 months for mistakes or fraud, contact credit reporting agencies directly to address errors, and fix credit issues yourself when possible. It also lists the addresses and websites for Experian, TransUnion, and Equifax so consumers can access their credit reports and histories from the three major credit bureaus.
This document is a homebuyer quiz that tests knowledge about various aspects of the homebuying process. It contains 20 multiple choice questions about topics like budgeting for homeownership, determining what you can afford, components of the monthly mortgage payment, contingencies in an offer to purchase, types of loans and their characteristics, closing costs, title insurance, contacting the lender if financial difficulties arise, and what is/is not covered by homeowners insurance. The quiz is followed by the email address of the author to share results.
In this video from ChefConf 2014 in San Francisco, Cycle Computing CEO Jason Stowe outlines the biggest challenge facing us today, climate change, and suggests how Cloud HPC can help find a solution, including ideas around climate engineering and renewable energy.
As proof points, Jason presents use cases from Cycle Computing customers, including HGST (a Western Digital company), Aerospace Corporation, Novartis, and the University of Southern California. It's clear that with these new tools, which leverage both cloud computing and HPC, Cloud HPC enables researchers and designers to ask the right questions and find better answers, faster. This delivers a more powerful future and a means of solving these really difficult problems.
Watch the video presentation: http://insidehpc.com/2014/09/video-hpc-cluster-computing-64-156000-cores/
Asynchronous futures: Digital technologies at the time of the Anthropocene – Alexandre Monnin
1) The document discusses the future of digital technologies and their relationship to physical resources and sustainability in the context of the Anthropocene.
2) It notes that while Moore's Law has led to exponential growth in computing power, this has come at tremendous resource and energy costs that may not be sustainable long-term as technologies approach physical limits.
3) The document questions where research may lead in the future and considers more sustainable alternatives like biomimetics, new architectures, and alternative materials if current trajectories prove unsustainable in light of physical and resource constraints.
As a Presidio Fellow in Sustainability and Sports at the Presidio Graduate School, San Francisco, CA [http://www.presidio.edu/academics/presidiopro/certificates/sports-sustainability], I presented a class on energy efficiency and solar in sports stadiums and arenas. It covers related issues such as advanced BIM (Building Information Modeling or Building Intelligence Management), the Internet of Everything (IoT), continuous commissioning over the building lifecycle, LED lighting systems, and more.
This document provides an overview and roadmap for achieving broadband optical access of 10Gb/s everywhere. It discusses:
1) The TSB Photonics21-NGOIA project which aims to identify promising approaches to achieving ubiquitous 10Gb/s access.
2) A paradigm shift in optical networking towards more flexible, dynamically reconfigurable networks to improve energy efficiency.
3) The concept of an "ultimate" optical network architecture with a common infrastructure across access, metro and backbone networks to maximize statistical multiplexing gains and reduce costs.
4) Several candidate technologies for next-generation optical access such as long-reach PON, WDM-PON and hybrid TDM/W
Better Information Faster: Programming the Continuum – Ian Foster
This document discusses the computing continuum and efforts to enable better information faster through computation. It provides examples of how techniques like executing tasks closer to data sources or on specialized hardware can significantly accelerate applications. Programming models and managed services are explored for specifying and executing workloads across diverse infrastructure. There are still open questions around optimizing networks, algorithms, and applications for the computing continuum.
Building Reactive Fast Data & the Data Lake with Akka, Kafka, Spark – Todd Fritz
In this session, we will discuss:
* reactive architecture tenets
* distributed “fast data” streams
* application and analytics focused Data Lake
Enterprise-level concerns and the importance of holistic governance, operational management, and a Metadata Lake will be conceptually investigated. The next level of detail will be to explore what a prospective architecture looks like at scale, with terabytes of ingestion per day: how scale puts pressure on an architecture, and how to be successful without losing data in a mission-critical system via resilient, self-healing, scalable technologies. DevOps and application architecture concerns will be first-class themes throughout.
Reactive principles and technology will be the second act of this talk. Kafka. Akka. Spark. Various streaming technologies (Kafka Streams, Akka Streams, Spark Streaming) will be reviewed to identify what they are best suited for. The fast data pipeline discussion will center around Kafka, Akka, and Apache Flink (Lightbend Fast Data platform). We’ll also walk through an exciting addition to the Akka family, Alpakka, which is a Camel equivalent for Enterprise Integration Patterns.
The final act will be to dive into the Data Lake, from both an analytics and application development perspective. Technologies used to explain concepts will include Amazon and Hadoop. A Data Lake may service multiple analytics consumers with various “views” (and access levels) of data. It may also be a participant of various applications, perhaps by acting as a centralized source for reference data or common middleware (in turn feeding the analytics aspect). The concept of the Metadata Lake to apply structure, meaning and purpose will be an over-arching success factor for a Data Lake. The difference between the Data Lake and Metadata Lake is conceptually similar to a Halocline… Various technologies (Iglu/Snowplow and more) will be discussed from a feature standpoint to flesh out the technology capabilities needed for Data Lake governance.
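The fast-data pipeline idea, events aggregated into time windows on their way to the lake, can be sketched in plain Python, independent of the Kafka/Akka/Flink APIs the talk actually covers:

```python
from collections import defaultdict

def window_sum(events, window_seconds):
    """Sum event values into fixed-size tumbling time windows.
    `events` is an iterable of (timestamp, value) pairs; the result
    maps window index (ts // window_seconds) to the window's total."""
    windows = defaultdict(float)
    for ts, value in events:
        windows[ts // window_seconds] += value
    return dict(windows)

# Illustrative event stream: (timestamp_seconds, value).
events = [(0, 1.0), (3, 2.0), (5, 4.0), (11, 8.0)]
totals = window_sum(events, 5)   # {0: 3.0, 1: 4.0, 2: 8.0}
```

Real streaming engines add the hard parts this sketch omits: out-of-order events, watermarks, fault tolerance, and exactly-once delivery.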
Big Data Everywhere Chicago: High Performance Computing - Contributions Towar... – BigDataEverywhere
The document discusses the history and future of high performance computing (HPC). It outlines the key technologies and architectures that have enabled exponential increases in computational power over recent decades, including vector processing, parallelization, GPUs, and interconnects like InfiniBand. The document also examines emerging technologies like exascale computing and quantum computing that could further push the boundaries of HPC. Overall, the document argues that HPC will remain indispensable for scientific discovery and engineering innovation into the future.
The document discusses the concepts of smart and adaptive architecture. It begins by defining smart architecture as buildings that can adapt to changing needs over time through the use of technology. Next, it explores the history of materials in architecture and how modern materials like steel and glass allowed for new structural possibilities. The document then discusses how computer technologies now allow the use of smart materials whose properties can actively change. It provides examples of passive and active building systems that allow architecture to respond to its environment. In conclusion, transformable structures and smart facades are presented as ways to create architecture that can adapt in real-time through the use of kinetics and responsive designs.
Blockchain general presentation nov 2017 v eng – David Vangulick
These slides introduce the concept of blockchain and how this technology can be used for peer-to-peer energy exchange linked with the wholesale energy market.
OpenStack & the Evolving Cloud Ecosystem – Mark Voelker
OpenStack has come a long way since 2010. What started as a collaboration on compute and storage between NASA and Rackspace has changed dramatically and grown into a large, successful open source project that meets the needs of thousands of organizations. But OpenStack hasn’t evolved in a vacuum over the past seven years: the technology landscape around it has been changing as well. Join VMware’s chief OpenStack architect and longtime community member Mark Voelker for a look at the new technology landscape around OpenStack, how we got here, and where we might go next. We’ll discuss how what started as an IaaS platform ended up being a winning platform for Network Functions Virtualization and telco applications, how OpenStack came to be selected as a common underpinning for container orchestration systems like Kubernetes, how OpenStack governance influenced other open source communities, and how OpenStack changed the way companies looked at open source. We’ll consider the role IaaS might play in a future that includes options like functions-as-a-service, containers, and the internet of things. We’ll consider OpenStack as a common foundation for a variety of new technologies, and discuss OpenStack’s lasting impact in the cloud ecosystem. We’ll also discuss how OpenStack is changing and adapting to shifts in the technology landscape, both as an open source community and in terms of product offerings. Learn about new interoperability programs targeted at use cases that didn’t exist seven years ago, and new initiatives from the OpenStack technical community and Foundation.
DARPA ERI Summit 2018: The End of Moore’s Law & Faster General Purpose Comput... – zionsaint
John Hennessy gave a talk outlining the end of Moore's law and faster general purpose computing, and opportunities for a new golden age. He discussed how three key changes - the end of Dennard scaling, slowing Moore's law transistor gains, and architectural limitations - have converged to end the steady performance increases of the past. This marks the end of an era of stunning microprocessor progress. Domain specific architectures and languages that better match applications to tailored hardware designs provide new opportunities for more efficient computing. Research into cheaper hardware development, new technologies, and the co-evolution of domains, languages and architectures could enable a new golden age.
This document provides an overview of the history and development of nanotechnology. It discusses key early developments like Richard Feynman's 1959 talk envisioning atomic engineering. It also covers the invention of the scanning tunneling microscope in 1981 and the discovery of buckyballs in 1985. The document then discusses the growth of the nanotechnology industry and funding increases over time. It provides examples of potential applications of nanotechnology and how properties become size-dependent at the nanoscale. Finally, it defines integrated circuits and microelectromechanical systems to provide context around miniaturization.
Advantages to Industrial Physics and Digital Portals in Developing Green Technology and Remote Building, increasing Industrial Scale and Reclaiming Legacy with Advance Science
The document discusses how the parallel computing revolution is only half over. It summarizes the past eras of computing including pre-electronic, vacuum tubes, transistors, VLSI-Moore's law, and parallel supercomputers. It notes that Ivan Sutherland predicted in 1977 that the VLSI revolution was only half over. After 1990, commodity microprocessors took over high performance computing. Recent developments in AI have created a new demand for specialized computing architectures. Startups are now unveiling new deep learning processors using wafer-scale integration and other innovations to accelerate AI training.
Software is increasingly powering technological innovation and disrupting traditional industries as more aspects become software-defined. Mobile data usage continues rising rapidly, driven mainly by video. The internet economy is advancing towards a service-centric model. Software-defined networking aims to make networks more programmable through logical abstractions, enabling new capabilities like traffic engineering and dynamic configuration. However, distributed systems and concurrency present challenges to scaling that require ongoing research.
Here are the top 10 questions to ask when shopping for a mortgage: What is the interest rate and annual percentage rate? What fees are involved in getting this mortgage? What is the loan term in years? Will I have to pay points upfront for a lower interest rate? What kind of down payment is required? Is the interest rate fixed or adjustable? What kind of homeowner's insurance is required? Will the lender require private mortgage insurance? What kind of escrow account will be set up for taxes and insurance? What other costs are involved in getting this mortgage?
The current Internet architecture has multiple layers including an application layer, socket layer, IP address layer, MAC address layer, and network layers. At the lowest levels, a MAC address and IP address both point to the same physical location, while higher level protocols like sockets and services use the IP address along with additional identifiers like service IDs and socket IDs to enable communication between applications running on different host devices on a network.
The document discusses various financial ratios used to analyze different aspects of a business's performance. It covers ratios related to liquidity, investment/shareholders, gearing, profitability, and financial metrics. Specific ratios discussed include the current ratio, acid test ratio, earnings per share, price earnings ratio, gearing ratio, gross profit margin, net profit margin, return on capital employed, asset turnover, stock turnover, and debtor days.
The document summarizes key aspects of ancient Chinese history, including important dynasties like the Qin, Han, Tang, and Song. It discusses the Chinese social hierarchy, family structures, the teachings of Confucius, geography, economy, trade routes like the Silk Road, important inventions and technologies, the writing system, architecture, and daily life. The dynasties established centralized governments and brought periods of stability and prosperity to China, while the teachings of Confucius emphasized family and social harmony.
Steve's presentation at ICCC 2009(Stephen Mc Intyre)Wladimir Illescas
This document discusses criticisms of claims that the 1990s were the warmest decade and 1998 the warmest year of the millennium based on temperature reconstructions. It notes that minor variations in data versions and proxies can yield opposite results. It also discusses criticisms of the "hockey stick" temperature graph that was featured prominently in IPCC reports and disputes that multiple independent studies all found late 20th century warming, noting many used common proxies. The document questions whether key proxies like bristlecones have been robustly updated and whether simple statistical models apply to complex trees.
The document provides instructions to review credit reports every 4 months for mistakes or fraud, contact credit reporting agencies directly to address errors, and fix credit issues yourself when possible. It also lists the addresses and websites for Experian, TransUnion, and Equifax so consumers can access their credit reports and histories from the three major credit bureaus.
This document is a homebuyer quiz that tests knowledge about various aspects of the homebuying process. It contains 20 multiple choice questions about topics like budgeting for homeownership, determining what you can afford, components of the monthly mortgage payment, contingencies in an offer to purchase, types of loans and their characteristics, closing costs, title insurance, contacting the lender if financial difficulties arise, and what is/is not covered by homeowners insurance. The quiz is followed by the email address of the author to share results.
In this video from ChefConf 2014 in San Francisco, Cycle Computing CEO Jason Stowe outlines the biggest challenge facing us today, Climate Change, and suggests how Cloud HPC can help find a solution, including ideas around Climate Engineering, and Renewable Energy.
"As proof points, Jason uses three use cases from Cycle Computing customers, including from companies like HGST (a Western Digital Company), Aerospace Corporation, Novartis, and the University of Southern California. It’s clear that with these new tools that leverage both Cloud Computing, and HPC – the power of Cloud HPC enables researchers, and designers to ask the right questions, to help them find better answers, faster. This all delivers a more powerful future, and means to solving these really difficult problems."
Watch the video presentation: http://insidehpc.com/2014/09/video-hpc-cluster-computing-64-156000-cores/
Asynchronous futures: Digital technologies at the time of the AnthropoceneAlexandre Monnin
1) The document discusses the future of digital technologies and their relationship to physical resources and sustainability in the context of the Anthropocene.
2) It notes that while Moore's Law has led to exponential growth in computing power, this has come at tremendous resource and energy costs that may not be sustainable long-term as technologies approach physical limits.
3) The document questions where research may lead in the future and considers more sustainable alternatives like biomimetics, new architectures, and alternative materials if current trajectories prove unsustainable in light of physical and resource constraints.
As a Presidio Fellow in Sustainability and Sports, at the Presidio Graduate School, San Francisco, CA, [http://www.presidio.edu/academics/presidiopro/certificates/sports- sustainability] I presented a class on energy efficiency and solar in sports stadiums and arenas. It covers related issues of advanced BIM (Building Information Modeling or Building Intelligence Management), Internet of Everything (IoT), continuous commissioning over building lifecycle, LED lighting systems, and more.
This document provides an overview and roadmap for achieving broadband optical access of 10Gb/s everywhere. It discusses:
1) The TSB Photonics21-NGOIA project which aims to identify promising approaches to achieving ubiquitous 10Gb/s access.
2) A paradigm shift in optical networking towards more flexible, dynamically reconfigurable networks to improve energy efficiency.
3) The concept of an "ultimate" optical network architecture with a common infrastructure across access, metro and backbone networks to maximize statistical multiplexing gains and reduce costs.
4) Several candidate technologies for next-generation optical access such as long-reach PON, WDM-PON and hybrid TDM/W
Better Information Faster: Programming the ContinuumIan Foster
This document discusses the computing continuum and efforts to enable better information faster through computation. It provides examples of how techniques like executing tasks closer to data sources or on specialized hardware can significantly accelerate applications. Programming models and managed services are explored for specifying and executing workloads across diverse infrastructure. There are still open questions around optimizing networks, algorithms, and applications for the computing continuum.
Building Reactive Fast Data & the Data Lake with Akka, Kafka, SparkTodd Fritz
In this session, we will discuss:
* reactive architecture tenets
* distributed “fast data” streams
* application and analytics focused Data Lake
Enterprise level concerns and the importance of holistic governance, operational management, and a Metadata Lake will be conceptually investigated. The next level of detail will be to explore what a prospective architecture looks like at scale with Terabytes of ingestion per day, how scale puts pressure on an architecture, and how to be successful without losing data in a mission critical system via resilient, self-healing, scalable technologies. DevOps and application architecture concerns will be first-class themes throughout.
Reactive principles and technology will be the second act of this talk. Kafka. Akka. Spark. Various streaming technologies (Kafka Streams, Akka Streams, Spark Streaming) will be reviewed to identify what they are best suited for. The fast data pipeline discussion will center around Kafka, Akka, and Apache Flink (Lightbend Fast Data platform). We’ll also walk through an exciting addition to the Akka family, Alpakka, which is a Camel equivalent for Enterprise Integration Patterns.
The final act will be to dive into the Data Lake, from both an analytics and application development perspective. Technologies used to explain concepts will include Amazon and Hadoop. A Data Lake may service multiple analytics consumers with various “views” (and access levels) of data. It may also be a participant of various applications, perhaps by acting as a centralized source for reference data or common middleware (in turn feeding the analytics aspect). The concept of the Metadata Lake to apply structure, meaning and purpose will be an over-arching success factor for a Data Lake. The difference between the Data Lake and Metadata Lake is conceptually similar to a Halocline… Various technologies (Iglu/Snowplow and more) will be discussed from a feature standpoint to flesh out the technology capabilities needed for Data Lake governance.
1. Gary Berger
Technical Leader, Engineering
Office of the CTO, DSSG
Biggest Problems in Cloud Design Today
Source: http://visibleearth.nasa.gov
2. Internet being dominated by real-time entertainment
Source: Sandvine, 2010 Global Internet Phenomena Report
3. What is an Architect?
IMHOTEP
DOCTOR, ARCHITECT, HIGH PRIEST, SCRIBE AND VIZIER TO KING DJOSER
“An architect does not arrive at his finished product solely by a sequence of rationalizations, like a scientist, or through the workings of the Zeitgeist. Nor does he reach them by uninhibited intuition, like a musician or painter. He thinks of forms intuitively, and then tries to justify them rationally.” (Peter Collins, 1966)
“Good architecture has been seen largely as either working within a context or circumventing it, depending on which principles are adopted and where the cutting edge is perceived.” (Theory of Architecture, Paul-Alan Johnson, 1994)
6. Why is Architecture hard to understand?
“Whereof one cannot speak, one must pass over in silence.” (Wittgenstein)
7. Tacit Knowledge (Informal Knowledge)
• Knowledge that is difficult to transfer to another person by means of writing it down or verbalizing it.
• Knowledge which cannot be codified, but can only be transmitted via training or gained through personal experience.
• Inherent “know-how”, as opposed to “know-what” (facts), “know-why” (science), or “know-who” (networking). It involves learning and skill, but not in a way that can be written down.
Source: http://en.wikipedia.org/wiki/Tacit_knowledge, adapted from 'The Tacit Dimension', philosopher-chemist Michael Polanyi
W.T. Wallington walks a 21,600 lb stone
9. "Knowledge as the Competitive Resource”
• "Knowledge is not just another resource alongside the traditional factors of production -- labor, capital and land -- but the only meaningful resource today” [Drucker, 1993]
• “Knowledge is the source of the highest-quality power and is the key to the power-shift that lies ahead. Knowledge is not merely an adjunct of money power and muscle power, but eventually will be the ultimate replacement of other resources” [Toffler, 1990]
• “The economic and producing power of a modern corporation lies more in its intellectual and service capabilities than in its hard assets such as land, plant and equipment” [Quinn, 1992]
12. Awesome Ladder!
Von Neumann Architecture (John von Neumann, 1903-1957)
[diagram: CPU containing control, ALU and registers; instruction & data memory; input/output; connected by a control & address bus and a data & instruction bus]
14. Data Center Blueprint
[diagram: client access tier (HTTP); POD services tier; compute/data grid of independent compute PODs; unified I/O 10GE data network; block store; annotations for capacity scaling, I/O scaling, and data snooping/migration]
15. Things we are going to talk about
• Dealing with Scalability
• Dealing with Data
• Dealing with Security
18. What is Scalability?
Mechanical and biological systems all have limits.
Scaling Factors
• All systems reach a limit relative to their size.
• Understanding where these limitations arise gives us a clue where to look for performance bottlenecks.
• Architects typically find limitations through trial and error.
• Concurrency = the interaction between processors
• Contention = the degree of serialization on shared writeable data
• Coherency = the penalty incurred for maintaining consistency of shared writable data
21. Scalability Can Be Measured
Universal Scalability Law (Guerrilla Capacity Planning, Gunther, 2007)
• C(p) = scaleup|scaleout
• p = number of processors
• a = serialized fraction (contention)
• k = coherency, k >= 0
• Scalability is not infinite but a concave function
We are making an assumption here that we have an exponentially distributed load and service rate (i.e. a Poisson distribution).
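The slide lists the USL's parameters but not its closed form; Gunther's model is C(p) = p / (1 + a(p-1) + k·p(p-1)). A minimal sketch of how the two coefficients shape the curve (the coefficient values below are illustrative, not from the talk):

```python
def usl_capacity(p, a, k):
    """Gunther's Universal Scalability Law: relative capacity C(p) for
    p processors, serialized fraction a (contention), coherency penalty k."""
    return p / (1 + a * (p - 1) + k * p * (p - 1))

# No contention, no coherency cost: linear scaling.
assert usl_capacity(8, a=0.0, k=0.0) == 8.0

# Contention alone (k = 0, i.e. Amdahl's law) flattens toward a ceiling,
# while any k > 0 makes the curve concave: capacity eventually *drops*
# as processors are added.
for p in (1, 16, 64, 256):
    print(p, usl_capacity(p, a=0.05, k=0.001))
```

With a = 0.05 and k = 0.001, capacity at 256 processors is already lower than at 16 — the concavity the slide warns about.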
22. Why Scale-Up is Important: Beyond Wimpy Cores
[chart: USL capacity curves; with k = 0 (Amdahl) capacity approaches an asymptotic maximum ceiling, while for k > 0 coherency starts to dominate and capacity peaks at max capacity p*]
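The max-capacity point p* called out in the chart can be computed directly from the USL coefficients: Gunther shows the curve peaks near p* = sqrt((1 - a)/k) when k > 0, whereas with k = 0 (pure Amdahl) there is no peak, only the 1/a ceiling. A small sketch with illustrative coefficients:

```python
import math

def usl_capacity(p, a, k):
    """Gunther's USL: C(p) = p / (1 + a(p-1) + k*p*(p-1))."""
    return p / (1 + a * (p - 1) + k * p * (p - 1))

def max_capacity_point(a, k):
    """Processor count p* where C(p) peaks: sqrt((1 - a) / k), for k > 0."""
    return math.sqrt((1 - a) / k)

p_star = max_capacity_point(a=0.05, k=0.001)   # roughly 31 processors
# Beyond p*, adding processors reduces delivered capacity:
assert usl_capacity(round(p_star), 0.05, 0.001) > usl_capacity(100, 0.05, 0.001)
```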
23. Conclusion: We Need Models
• Effectively modeling some of these characteristics is a top-of-mind problem for current application architects.
• Eric Brewer's CAP theorem challenges architects to deal with latency as a proxy for strong consistency.
• Much work is going on in understanding these problems and striking a balance between availability and consistency (i.e. adaptive consistency).
• Some patterns are difficult to model mathematically.
Moore's Impact [1]
• Technologist's Moore's Law: double transistors per chip every 2 years. Slows or stops: TBD.
• Microarchitect's Moore's Law: double performance per core every 2 years. Slowed or stopped: early 2000s.
• Multicore's Moore's Law: double cores per chip every 2 years and double parallelism per workload every 2 years (aided by architectural support for parallelism), giving double performance per chip every 2 years. Or GAME OVER?
1. Amdahl's Law in the Multicore Era, Hill, Marty, Wisconsin Multifacet Project
26. Data Management
Data management is the development, execution and supervision of plans, policies, programs and practices that control, protect, deliver and enhance the value of data and information assets.
Source: Data Management International, dama.org
What are the two most important commands in the data center today? (NFS Read/Write)
27. Data Management: Models and Practices
• Request-level parallelism
• Data-level parallelism
• Persistence model: durable, volatile, transient
• Caching eviction policies
• Synchronous/asynchronous updates
• Denormalization of data
• Caching trees (anti-cache spoilers)
• Distributed hash tables (NoSQL): key/value, column, document, graph
• Messaging and serialization (IPC): lightweight interfaces (PB, Thrift, HC)
• Distributed transactions: opportunistic locking, vector clocks, Paxos protocols
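Of the distributed-transaction practices listed, vector clocks are compact enough to sketch: each replica keeps a counter per node, and two versions conflict when neither clock dominates the other. A toy illustration (not the implementation of any particular NoSQL store):

```python
def vc_increment(clock, node):
    """Return a copy of the vector clock with node's counter bumped."""
    c = dict(clock)
    c[node] = c.get(node, 0) + 1
    return c

def vc_merge(a, b):
    """Pointwise maximum: the clock after reconciling two versions."""
    return {n: max(a.get(n, 0), b.get(n, 0)) for n in a.keys() | b.keys()}

def vc_dominates(a, b):
    """True if the version carrying clock a has seen everything b has."""
    return all(a.get(n, 0) >= v for n, v in b.items())

def vc_conflict(a, b):
    """Concurrent updates: neither clock dominates the other."""
    return not vc_dominates(a, b) and not vc_dominates(b, a)

v1 = vc_increment({}, "A")    # write on node A
v2 = vc_increment(v1, "B")    # descends from v1, so it supersedes it
v3 = vc_increment(v1, "C")    # also descends from v1, concurrent with v2
assert vc_dominates(v2, v1) and vc_conflict(v2, v3)
merged = vc_merge(v2, v3)     # reconciled clock: {"A": 1, "B": 1, "C": 1}
```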
28. Flash Crowds: Demand spike on a singular resource
Jason McHugh, Principal Engineer, Amazon
• In 69.6 seconds, 31K requests received for a single object
• Cache spoilers
• Cache trees and a coherency protocol built in to relax consistency to protect availability
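One common way to survive a demand spike like this is to coalesce concurrent requests for the hot object so only one fetch reaches the origin at a time. The sketch below is a generic "single-flight" illustration of that idea, not Amazon's actual cache-tree protocol:

```python
import threading

class SingleFlightCache:
    """Coalesce concurrent requests for the same hot key: one caller
    fetches from the origin; the rest wait and reuse its result."""
    def __init__(self, fetch):
        self.fetch = fetch          # expensive origin lookup
        self.cache = {}
        self.inflight = {}          # key -> Event followers wait on
        self.lock = threading.Lock()

    def get(self, key):
        while True:
            with self.lock:
                if key in self.cache:
                    return self.cache[key]
                ev = self.inflight.get(key)
                if ev is None:
                    ev = self.inflight[key] = threading.Event()
                    leader = True
                else:
                    leader = False
            if leader:
                value = self.fetch(key)        # the only origin hit
                with self.lock:
                    self.cache[key] = value
                    del self.inflight[key]
                ev.set()
                return value
            ev.wait()                          # follower: wait, retry loop

calls = []
cache = SingleFlightCache(lambda k: calls.append(k) or k.upper())
threads = [threading.Thread(target=cache.get, args=("obj",)) for _ in range(31)]
[t.start() for t in threads]
[t.join() for t in threads]
assert calls == ["obj"]   # origin was hit once despite 31 concurrent requests
```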
32. The “Illusion” of Security
• Perimeter defense seals off the data center, so the attack surface moves to the client.
• Attackers find the path of least resistance: email addresses, social websites, standard naming practices (i.e. firstname.lastname@company.com).
The Apple I, recently sold for $210,000
“Simply keeping out bad code is not sufficient to keep out bad computation” (Stefan Savage, UC San Diego)
33. Modern Attacks
Easy to 0wn: normal processing leads to code execution
Examples
• Memory trespass
• Rogue AV through mass mailings
• Injection flaws (SQL, OS, LDAP)
• Cross-site scripting
• Broken authentication and session management
• Insecure direct object references
• Cross-site request forgery
Summary
• Normal processing leads to code execution: receive packet/request, then parse display/data
Mitigation Strategies
• ASLR (Address Space Layout Randomization)
• DEP (Data Execution Prevention)
• Stack cookies
• Sandboxing
• Need to understand strategy more than tactics
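For the injection flaws listed above, the textbook mitigation is parameterized queries: the statement shape is fixed and user input is bound as data rather than spliced into the SQL text. A minimal sqlite3 sketch (the users table is hypothetical):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, role TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'admin')")

attacker = "x' OR '1'='1"   # classic injection payload

# Vulnerable: string splicing turns the payload into SQL logic,
# so the WHERE clause is true for every row.
spliced = "SELECT * FROM users WHERE name = '%s'" % attacker
assert len(db.execute(spliced).fetchall()) == 1

# Mitigated: the driver binds the payload as a plain string value,
# and no user named "x' OR '1'='1" exists.
bound = db.execute("SELECT * FROM users WHERE name = ?", (attacker,))
assert bound.fetchall() == []
```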
34. Workstation Attack Surface
Source: Dino A. Dai Zovi, Memory Corruption, Exploitation and You
35. Zero Day Attacks
• The price of disclosure?
• There are 1419 Researchers working at ZDI?
• ZDI can be used to launch a new Aurora attack
37. Architectural Ladders
[timeline, 3000 BCE to 300 CE: Neolithic architecture; Sumerian architecture; Ancient Egyptian architecture; Classical architecture]
38. Architecture
• Architecture is created to express some intent but is not the purpose itself; therefore architecture must serve a purpose.
• Architectures must evolve or die, sometimes at the expense of the intent and function.
• Architectures can be rediscovered, refactored and reused for a new purpose or function.
• Architectures may not realize their full potential.
• Architectures do not replace fundamentals in engineering and science, but establish a pattern from which to describe their effectiveness.
Foote, Yoder, 1999, The Big Ball of Mud
ZIGGURAT: Dubai's Carbon Neutral Pyramid Will House 1 Million
39. Conclusion
• Some of today's problems were recognized over a decade ago but lacked the economic justification for change.
• History repeats as we refactor the “engineered solutions” of the past, just at different scales.
• New architectures are being proposed based on empirical evidence, prototyping and experimentation; others are just a horrible guess.
• Architects need to quickly establish new patterns with the goal of pushing the bottlenecks to the least-cost contributor (i.e. energy-proportional computing).
• Architecture should help us describe the intent of the product or function, not serve merely as a generalization.
• Architectures today are agile.
• Architecture for efficient computing maximizes processing power per joule of energy.
40. Uggh.. Predictions?
• By 2012, 20 percent of businesses will own no IT assets.
• By 2012, India-centric IT services companies will represent 20 percent of the leading cloud aggregators in the market (through cloud service offerings).
• By 2012, Facebook will become the hub for social network integration and Web socialization.
• By 2013, mobile phones will overtake PCs as the most common Web access device worldwide.
• By 2014, most IT business cases will include carbon remediation cost.
• By 2014, over 3 billion of the world's adult population will be able to transact electronically via mobile or Internet technology.
• By 2015, context will be as influential to mobile consumer services and relationships as search engines are to the Web.
• By 2016, all Global 2000 companies will use public cloud services.
44. Meta Structures to scale
(Diagram: a Service Directory layered over multiple MetaData nodes, which in turn front the Content stores.)
45. Persistency
pNFS (RFC 5661):
• Parallel opens by file handle
• Asynchronous notification on lock availability
• Commands linearized in slot table
• Support for file, object and block targets
HoneyComb:
• Automated data management
• Extreme data mobility
• Ability to run 3rd-party storage apps
• Highly reliable with self-healing
• Flat namespace
• Single management entity
• Multi-cell architecture
• Programmatic APIs
• Immutable
• Automatic load balancing
• Transparent node upgrades
• Metadata support
• Storage apps support
• Deferred maintenance model
• Open-source software only
46. Clustered Scalability
Guerrilla Capacity Planning, Gunther, 2007
Universal Scalability Law
• C(p) = intranode scalability
• n = nodes
• p = processors per node
• a_z = global internode contention
• k_z = global internode coherency
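The legend above corresponds to Gunther's Universal Scalability Law. A minimal sketch of the single-node form in Python follows; the coefficient values for a (contention) and k (coherency) are illustrative assumptions, not figures from the deck:

```python
# Sketch of Gunther's Universal Scalability Law (USL):
#   C(p) = p / (1 + a*(p - 1) + k*p*(p - 1))
# a models contention (serialized work); k models coherency
# (pairwise data-exchange) overhead. Values of a and k are
# illustrative only.

def usl_capacity(p, a=0.03, k=0.0005):
    """Relative capacity at p processors."""
    return p / (1 + a * (p - 1) + k * p * (p - 1))

# e.g. usl_capacity(32) is about 13.2 with these illustrative
# coefficients: capacity rises, peaks, then retrogrades as the
# coherency term k*p*(p-1) comes to dominate.
```

Note the contrast with a linear model: because the k term grows quadratically in p, adding nodes past the peak actually reduces delivered capacity, which is exactly why capacity planning needs these coefficients measured rather than guessed.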
Questions:
How many people have been in a data center at any point in their career?
How many people have been in a data center in the last year?
How many people have been part of the construction, staging and turn-up of a data center in their career?
How many have done so in the last year?
Streamed or buffered audio and video (RTSP, RTP, RTMP, Flash), peercasting (PPStream, Octoshape), placeshifting (Slingbox, home media servers)
Architects must distill patterns to find a common way of testing for rational justification.
Architected over 100s of years. Scale evolved over several generations. Purpose and intent were left to interpretation, but this is believed to have been a place to bury highly important people in the culture; it may have been the architects themselves. Around 3000 BC they dug a ditch, a bank, and a ring of 56 pits (the Aubrey Holes) under the chalk, possibly to hold bluestones from Wales. 500 years later the sarsen stones were put up and the bluestones were moved. An avenue runs to the river. Many generations, abandoning one form and moving to another. http://www.independent.co.uk/life-style/history/syrias-stonehenge-neolithic-stone-circles-alignments-and-possible-tombs-discovered-1914047.html They didn't have many scaling problems here; lots of mathematical and astronomical knowledge (moon and sun trajectories). Some, the "bluestones", weighed four tons each and were brought a distance of 150 miles from Pembrokeshire, Wales. http://video.pbs.org/video/1636852466/ The era ended with the introduction of copper and gold; personal wealth led to individual burials.
http://www.independent.co.uk/life-style/history/syrias-stonehenge-neolithic-stone-circles-alignments-and-possible-tombs-discovered-1914047.html
BCE = Before Common Era. They were excellent at dealing with wood and stone.
http://www.theforgottentechnology.com/newpage1
The Rosetta Stone, among other things, is a public notice, part of which says: "with regard to the priests, that they should pay no more as the tax for admission to the priesthood than what was appointed them throughout his father's reign and until the first year of his own reign; and has relieved the members of the priestly orders from the yearly journey to Alexandria" — basically relieving the priests from paying taxes.
Next slide into Neolithic Architecture. We are going to dabble in some ancient architectures and see how they can be related.
This is the low level
MESIF (Modified, Exclusive, Shared, Invalid, Forward)
CAP (Consistency, Availability, Partition tolerance)
REST (Representational State Transfer)
pNFS (Parallel NFS, NFSv4.1)
DHT (Distributed Hash Table)
NoSQL (Not Only SQL)
DSL (Domain-Specific Language)
ORM (Object-Relational Mapper)
PCM (Phase Change Memory)
TSV (Through-Silicon Via)
This is the high level
Scale goes from simple structures to whole cities. http://en.wikipedia.org/wiki/Pyramid_of_Djoser Architect: Imhotep. Invention of writing around 3100 BCE. The Sumerians were the first society to create the city itself as a built form.
http://en.wikipedia.org/wiki/Pyramid_of_Djoser Architect: Imhotep. Appears in the late Neolithic.
Also Gunther. Our focus is to model Poisson arrival rates and service times, even though Ethernet exhibits some self-similar behavior (i.e. LRD). Contention (i.e. spinlock, row lock, etc.). Coherency = consistency. "The problem of characterizing Internet traffic is not one that can be solved easily, once and for all. As the Internet increases in size and the technologies connected to it change, we must constantly monitor and reevaluate our assumptions to ensure that our conceptual models correctly represent reality."[1]
Serialized contention: hyperthreading (SMT), spinlocks, mutexes; there is a field of study around lockless algorithms. As parallel processes increase, the serialized contention becomes the prominent dependency. While there are other ways of modeling data, what is important to recognize is that a completely Poisson model is what allows us to balance out the load. The more self-similar or LRD the traffic, the more problematic it becomes to model behavior; Ethernet actually exhibits LRD behavior on the output — how much of this will cause bad architectural strategies? Like the conservation of mass, you have the conservation of bottlenecks: bottlenecks are neither created nor destroyed, they simply move from one point to another. Why should we pay attention to these models? Any architecture which is not based on these simple mathematics will have a difficult time being modeled correctly, and thus capacity planning will be completely ineffective. People always place the burden on the application to deal with bottlenecks, but there are only so many implementations which allow for a significant change — for instance, the use of GPUs for vectorization. This is the classical "speed-up" model, where we can reduce execution time by adding more SIMD-capable computation engines; as opposed to scale-up, which allows application demand to grow while keeping the serialized overhead the same (i.e. the same service rate) in order to protect customer expectations of service level. Other models: Geometric Model, Quadratic Model, Exponential Model. Think of a and k as state and federal taxes.
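The classical "speed-up" model the note refers to is Amdahl's law. A minimal sketch, where the parallel fraction f = 0.95 is an illustrative assumption and not a figure from the deck:

```python
# Amdahl's law ("speed-up"): only the parallelizable fraction f of
# the work benefits from p processors; the serialized remainder
# (1 - f) bounds the total speed-up at 1/(1 - f).

def amdahl_speedup(p, f=0.95):
    """Speed-up on p processors when fraction f is parallelizable."""
    return 1.0 / ((1.0 - f) + f / p)

# Even with 95% of the work parallel, the speed-up is capped near
# 1/(1 - f) = 20x no matter how many processors are added — which is
# why the serialized contention becomes the dominant dependency.
```

This is the "bottleneck moves" point in code form: past a few dozen processors the serialized 5% dominates, and further hardware spend buys almost nothing.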
Gunther: coherency overhead. The two variables are sigma (serialized contention) and kappa, which is the coherency (consistency) overhead. "Brawny cores still beat wimpy cores, most of the time" — Urs Hölzle, Google. Software development costs often dominate a company's overall technical expenses, so forcing programmers to parallelize more code can cost more than we'd save on the hardware side.
http://www.reshafim.org.il/ad/egypt/building/ The drawings on the left were found by the French at the quarries of Gebel Abu Feida in 1789. These pillar capitals, destined for a temple at Denderah being built by Cleopatra, were sketched with red ochre on the rock face in half the natural size. http://en.wikipedia.org/wiki/File:Ancient_Egypt_rope_manufacture.jpg
List of inventions in Ancient Egypt: black ink, first ox-drawn plows, 365-day calendar and leap year, paper, first triangular-shaped pyramids, organized labor, hieroglyphics as an early system of writing, sails.
NFSv4.1 file striping
Number of elements of a set (find largest match)
http://en.wikipedia.org/wiki/Pyramid_of_Djoser
An ancient mechanical computer[1][2] designed to calculate astronomical positions. The device, they say, is technically more complex than any known device for at least a millennium afterwards. The text is astronomical, with many numbers that could be related to planetary motions, and the gears are a mechanical representation of a second-century theory that explained the irregularities of the Moon's motion across the sky caused by its elliptical orbit.
“Memory trespass vulnerabilities are software weaknesses that allow memory accesses outside of the semantics of the programming language in which the software was written.” Fuzzing attacks are used to exploit unknown application behaviors, which can be used to create an exploit.
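A fuzzing loop of the kind described can be sketched as follows. This is a naive random ("dumb") fuzzer; parse_record is a hypothetical toy parser standing in for the software under test, not an API from the deck:

```python
# Minimal random-fuzzing sketch: throw malformed inputs at a parser
# and record any *unexpected* exception, i.e. behavior outside the
# documented failure modes.
import random

def parse_record(data: bytes):
    """Toy parser: a 1-byte length prefix followed by that many payload bytes."""
    if len(data) < 1:
        raise ValueError("empty input")
    n = data[0]
    if len(data) - 1 < n:
        # Without this bounds check, a C-style parser would read past the
        # buffer -- exactly the "memory trespass" class quoted above.
        raise ValueError("truncated payload")
    return data[1:1 + n]

def fuzz(trials=1000, seed=42):
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(16)))
        try:
            parse_record(blob)
        except ValueError:
            pass  # documented, handled failure mode
        except Exception as exc:
            crashes.append((blob, exc))  # unknown behavior worth investigating
    return crashes
```

An empty crash list only means no unexpected behavior was found for these random inputs; real fuzzers add coverage feedback and input mutation to search the space far more effectively.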
We can see what they were able to accomplish but don't know how or why. The architecture remains and can be studied even though it has no use today. Different scales, moving towards defined purposes: a burial ground, but for individuals with great wealth. From monuments for the group to monuments for the most wealthy and powerful. Each architecture develops to solve a purpose and then may be discarded or refactored for other purposes.
It wasn't the intent of the ancient architects to define their architectural period; it is for us to analyze the history of design and how its patterns change. The pharaohs wanted to do something "godlike", like live forever. It was the architect who had to figure out a way of explaining it, even though it required massive engineering skill. Maybe Imhotep's intent was to build the pyramid, and he got a buyer for it.
Architected over 100s of years. Scale evolved over thousands of years, from smaller stones to bigger stones. Many iterations, many stages: 500 years after the bluestones, the sarsen stones appeared. Around 3000 BC they dug a ditch, a bank, and a ring of 56 pits (the Aubrey Holes) under the chalk to hold the bluestones, which weighed four tons each and were brought a distance of 150 miles from Pembrokeshire, Wales. http://video.pbs.org/video/1636852466/ Many generations, abandoning one form and moving to another. http://www.independent.co.uk/life-style/history/syrias-stonehenge-neolithic-stone-circles-alignments-and-possible-tombs-discovered-1914047.html They didn't have many scaling problems here; lots of mathematical and astronomical knowledge (moon and sun trajectories). The era ended with the introduction of copper and gold; personal wealth led to individual burials.
Cardinality: a measure of the number of elements of a set (find largest match).