The document proposes an architecture for improving collaboration between staff and volunteers on Wikimedia projects. It involves setting up a test/development OpenStack cluster that is a clone of the production environment. This would allow volunteers liberal access to test proposed changes without affecting production. The architecture aims to provide privilege escalation for non-ops users and an environment for testing major changes. Key aspects include using OpenStack, LDAP, Puppet, PowerDNS, Gerrit and CloudInit to manage the test/dev cluster and integrate tools like MediaWiki, Nova and DNS.
Terraform is a tool for building, changing, and version controlling infrastructure safely and efficiently. It allows users to define and provision resources through code rather than manual configuration. Key concepts include variables, outputs, providers that interface with cloud platforms, reusable modules, and managing infrastructure through commands like plan, apply, and destroy. The presentation demonstrated Terraform's capabilities with a live demo and provided additional resources for learning more.
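The concepts listed above (variables, providers, resources, outputs) can be sketched in a minimal, hypothetical Terraform configuration — the provider, region, AMI ID, and resource names here are illustrative assumptions, not taken from the presentation:

```hcl
# Input variable: parameterizes the configuration
variable "instance_type" {
  type    = string
  default = "t3.micro"
}

# Provider: the plugin that talks to a cloud platform
provider "aws" {
  region = "eu-west-1"
}

# Resource: a piece of infrastructure defined as code
resource "aws_instance" "web" {
  ami           = "ami-12345678" # placeholder AMI ID
  instance_type = var.instance_type
}

# Output: a value exported after apply
output "public_ip" {
  value = aws_instance.web.public_ip
}
```

With a configuration like this, `terraform plan` previews the changes, `terraform apply` provisions them, and `terraform destroy` tears them down.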
This document discusses Cloud Foundry, an open platform as a service (PaaS). It begins with an introduction of the author, Andy Piper, and his role as a Cloud Foundry developer advocate. It then discusses why an open cloud platform is important, defining Cloud Foundry and its key characteristics, like being open source and deployable on various clouds. It covers Java support on Cloud Foundry, including buildpacks and how various Java applications and frameworks are detected and run. It emphasizes the flexibility and portability Cloud Foundry provides for Java applications.
OpenFaaS (Functions as a Service) is a framework for building serverless functions with Docker, with first-class support for metrics. Any process can be packaged as a function, enabling you to consume a range of web events without repetitive boilerplate coding.
In this talk we look at the challenges of taking Docker and using it as the basis for a cloud platform. We highlight the work done by one of our own Cloudsoft engineers, Andrea Turli, who has contributed an Apache jclouds provider for Docker and integrated it with the open source project Brooklyn.
Andrea has written about this in a recent blog post, AMP for Docker. Using that as a starting point, we show how we are building on it to create a lightweight, dynamic Docker cloud, and compare and contrast this with work we are doing with our latest partner, Waratek, to help them create a similar lightweight, dynamic Java cloud using their Java application container technology.
Removing Environmental Differences - Simon Pearson, Outlyer
Is Docker the answer to the Stack x Platform x Cloud explosion that's engulfed the Enterprise?
Is IaaS really the right model, or just the one that worked?
Has PaaS's time finally arrived?
Can Docker make OS, library, and stack choices irrelevant to hosting and Ops?
This talk looks at how Pearson is investigating and moving towards Docker, what we’ve learned so far, and what you can learn from our experiences.
The document discusses building a Raspberry Pi Kubernetes cluster to run OpenFaas serverless functions. Some key points are:
1. A Raspberry Pi cluster can provide cloud-like capabilities at home by pooling hardware resources and allowing elastic scaling.
2. Kubernetes provides declarative deployments, configuration, service discovery, high availability, and elastic capacity for containers.
3. OpenFaas is a serverless framework that uses Docker containers and Kubernetes to build and run functions as a service.
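The declarative style mentioned in point 2 can be illustrated by a minimal, hypothetical Deployment manifest (the names and image are placeholders): you declare the desired state, and Kubernetes continuously reconciles the cluster toward it.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-function        # illustrative name
spec:
  replicas: 3                 # declared desired capacity; Kubernetes maintains it
  selector:
    matchLabels:
      app: hello-function
  template:
    metadata:
      labels:
        app: hello-function
    spec:
      containers:
      - name: hello-function
        image: example/hello:latest   # placeholder image
```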
Keynote by Diane Bryant, SVP and GM of the Data Center Group at Intel, at OpenStack Silicon Valley 2015.
Cloud computing provides tremendous agility and efficiency to organizations and is a driver of the digital service economy. In her keynote, Diane Bryant will discuss how Intel was an early leader in the adoption of cloud computing under her tenure as CIO, and how this experience has shaped a broader strategy to deliver tens of thousands of new clouds across the enterprise with Intel's new Cloud for All Initiative. Attendees can expect to learn about OpenStack's critical role in shaping the future of the enterprise data center and about key industry efforts to drive enterprise readiness of the OpenStack platform.
Defining & Enforcing Policies the GitOps Way - Weaveworks
GitOps is a great way to reliably and securely deploy both the infrastructure and applications in the context of Kubernetes. In this talk we will have a look at how we can use CNCF Open Policy Agent (OPA) to define and enforce policies along the entire supply chain. For example, an OPA Rego-based bot can review Git commits and automatically provide feedback, and in the runtime space the Gatekeeper project can be of great value.
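As a rough illustration of the kind of policy OPA can express, here is a sketch of a Rego admission rule — the package name, input shape, and registry URL are assumptions for illustration, not taken from the talk:

```rego
package k8s.admission

# Illustrative policy: deny Pods whose container images
# do not come from an allowed registry.
deny[msg] {
    input.request.kind.kind == "Pod"
    container := input.request.object.spec.containers[_]
    not startswith(container.image, "registry.example.com/")
    msg := sprintf("image %v is not from the allowed registry", [container.image])
}
```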
Link to YouTube Video of this talk: https://youtu.be/Xe0PDeENMoE
Speaker: Michael Hausenblas, Developer Advocate, AWS
Bio: Michael is a Developer Advocate at AWS, part of the container service team, focusing on container security. Michael shares his experience around cloud native infrastructure and apps through demos, blog posts, books, and public speaking engagements as well as contributes to open source software. Before AWS, Michael worked at Red Hat, Mesosphere, MapR and in two research institutions in Ireland and Austria.
DockerDay2015: Getting started with Docker - Docker-Hanoi
Docker is a tool that allows users to package applications into containers that can run on any Linux server. The key concepts include images which contain the files for an app and its dependencies, containers which are instances of images that run the app, and a Dockerfile which defines what goes into an image. Docker provides lightweight virtualization, portability across environments, and the ability to build distributed apps out of separate services. The meetup will cover Docker terminology, how to interact with Docker via its API and CLI, and include a hands-on lab.
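The image/container/Dockerfile relationship described above can be sketched with a minimal, hypothetical Dockerfile (the base image and application file are illustrative assumptions):

```dockerfile
# Base image: the files and dependencies the app builds on
FROM python:3.11-slim
# Copy the (hypothetical) application into the image
WORKDIR /app
COPY app.py /app/app.py
# Command run when a container is started from this image
CMD ["python", "app.py"]
```

Building the image with `docker build -t myapp .` and starting a container with `docker run myapp` shows the image-to-container step: the image is the packaged app, the container is a running instance of it.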
OpenStack Summit - Sydney: Designate Project Update - Graham Hayes
This document provides a project update on Designate from OpenStack Summit Sydney in November 2017. Designate is an OpenStack service that provides scalable, on-demand, self-service access to authoritative DNS services. It started in 2012 and became generally available in 2013. The speaker discusses Designate's integration with OpenStack, goals for the Queens release, possibilities for beyond Queens, cross-project work including improved integration with Neutron, and how to provide feedback and contribute to the project.
OpenStack: Changing the Face of Service Delivery - Mirantis
Keynote by Lew Tucker, VP and CTO of Cloud Computing at Cisco, at OpenStack Silicon Valley 2015.
As more companies move to software-driven infrastructures, OpenStack opens up new possibilities for traditional network service providers, media production, and content providers. Micro-services and carrier-grade service delivery become the new watchwords for companies looking to disrupt traditional players with virtualized services running on OpenStack.
This document compares using Kubernetes directly on OpenStack infrastructure (IaaS) versus using OpenShift on OpenStack. OpenShift is described as Kubernetes but with an enterprise focus. It provides a Platform as a Service (PaaS) that abstracts away much of the complexity of managing containers. Using OpenShift on OpenStack combines the benefits of containers with the flexibility of virtual machines and leverages OpenStack services like Swift, Cinder, and load balancers. The document outlines deployment architectures for both Magnum and OpenShift on OpenStack and invites the reader to try out exercises for each approach.
The Paris OpenStack Summit had over 5000 attendees from 876 companies representing 62 countries. Major themes included the growing community with new platinum members like Intel and SAP, increased interest in Docker and NFV, and Ceph emerging as a unified storage solution. Projects are focusing on usability, debugability, and scalability through efforts like refactoring Nova scheduler and Horizon, and enhancing HEAT.
Watch the webinar here: https://codefresh.io/unlimited-staging-environments-lp/
Sign up for a FREE Codefresh account today: https://codefresh.io/codefresh-signup/
How to run complete, disposable apps on Kubernetes for Staging and Development.
See the full video webinar on our blog at https://codefresh.io/webinars/unlimited_staging_environments_webinar/
This is an introductory presentation about Docker and OpenStack and where they come together. It also gives details about community projects in this area (Docker + OpenStack), with more detail on Nova-Docker. It assumes general background knowledge of both Docker and OpenStack.
OpenStack in an Ever Expanding World of Possibilities - Vancouver 2015 Summit - Lew Tucker
Over the past several years we have seen the continued adoption of OpenStack and its expansion into new areas: from cloud service providers and enterprise private clouds to large media companies, telecommunication giants, and big science. At the same time, open source based platforms for network functions virtualization (NFV) are fueling a movement toward cloud computing in almost all major telcos.
In the developer world, open source projects, such as Docker, Mesos, Kubernetes, and Spark are gaining a lot of attention and being integrated into OpenStack through projects Kolla and Magnum.
This session will cover how these projects and activities relate to each other and further expand the utility and adoption of OpenStack.
Two Years In Production With Kubernetes - An Experience Report - Kasper Nissen
This document summarizes a presentation about two years of experience using Kubernetes in production. It discusses how the company shifted to being application-oriented rather than machine-oriented, and introduced tools like Shuttle and Ham to improve developer experience and implement continuous delivery. It also covers how they used Kops to manage Kubernetes clusters across multiple availability zones and Dextre to improve node rollouts. While there were initial challenges, the presenter concludes that Kubernetes was the right choice and has allowed the company to scale their services.
OpenShift Overview Presentation by Marek Jelen for Zurich Geeks Event - OpenShift Origin
The document discusses OpenShift, Red Hat's free Platform as a Service (PaaS) for deploying applications in the cloud. It provides an overview of what cloud and PaaS are, and explains that OpenShift allows developers to easily deploy and automatically scale their applications. The document notes that OpenShift has a free tier for development use and more resources can be accessed by signing up. It also shares ways developers can install OpenShift locally for experimentation purposes using Vagrant.
Using the Terraform Enterprise GUI is perfect to start working with Terraform... - Mitchell Pronschinske
Using the Terraform Enterprise GUI is perfect to start working with Terraform as a human, but it's not when implementing a machine to machine interaction. Joern will present some examples of how to demystify the Terraform Enterprise API.
Andrew Spyker presented on the Netflix Cloud Platform and ZeroToDocker project. The following key points were discussed:
- ZeroToDocker provides Docker images of Netflix OSS projects like Eureka, Zuul and Asgard to more easily evaluate the technologies. However, the images are not intended for direct production use.
- A demo showed running a microservices application and supporting Netflix OSS services like Eureka and Zuul using Docker containers on a single machine.
- While Docker aids development and evaluation, additional tooling is needed to operationalize containers at production scale across multiple hosts for tasks like networking, security, logging and scheduling. Competing ecosystems are emerging to address these needs.
South Korea OpenStack UG - Study & Development team activities - Ian Choi
This slide deck shares the Korean OpenStack User Group's study and development activities.
OpenStack Korea User Group: https://groups.openstack.org/groups/south-korea
This document summarizes a presentation about Open Platform for Network Functions Virtualization (OPNFV). It discusses NFV challenges for telecom operators and introduces OPNFV as an open source platform that aims to develop and test an integrated virtual network functions infrastructure. Key aspects of OPNFV covered include its reference architecture, goals of contributing to relevant open source projects and establishing an NFV ecosystem, and examples of feature development and community labs/testing activities.
Ultimate DevOps: OpenShift Dedicated With CloudBees Jenkins Platform (Andy Pe...) - Red Hat Developers
Are you ready to innovate with cloud-native app development? Are you ready to accelerate business agility with continuous delivery (CD)? Well, now you can easily do both using CloudBees Jenkins Platform within OpenShift Dedicated by Red Hat. In this session, you'll learn how to seamlessly use this CD solution to fully automate your application development, test, and delivery life cycle. Using the CloudBees platform to automate your CD pipelines allows your developers to focus on what they do best—innovating. Combine that with the elasticity and scale of the Docker-based OpenShift Dedicated environment, and you'll remove many of the obstacles to business growth. Come see the future of digital innovation.
Leveraging Helm to manage Deployments on Kubernetes - Manoj Bhagwat
Kubernetes Helm, by making application deployment easy, standardized, and reusable, improves developer productivity, reduces deployment complexity, enhances operational readiness, and speeds up the adoption of cloud native apps.
Building a Raspberry Pi cluster with Kubernetes, OpenFaaS and .NET - Alex Ellis
Scott Hanselman and Alex Ellis build a Raspberry Pi cluster with Kubernetes, OpenFaaS and .NET. But why would you do this? And what is Kubernetes anyway? Find out everything you needed to know and more in this presentation.
Site Architecture Best Practices for Search Findability - Adam Audette
The information architecture (IA) of a website is the most essential factor that influences search spidering and (indirectly) indexing and ranking. Above and beyond search findability (the focus here), proper IA is directly related to usability and conversion optimization.
Subnet calculation from a given IP range, using a classless subnet mask: calculating the number of hosts in a subnet and the number of subnets that can be created in a given IP range.
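The calculation described above can be reproduced with Python's standard `ipaddress` module; this is a small sketch, and the example networks are illustrative, not taken from the slides:

```python
import ipaddress

def subnet_counts(network_cidr: str, new_prefix: int):
    """Given an IP range and a longer (classless) subnet mask,
    return (number of subnets, usable hosts per subnet)."""
    net = ipaddress.ip_network(network_cidr)
    # Splitting the range at the new prefix yields the possible subnets
    subnets = list(net.subnets(new_prefix=new_prefix))
    # Host bits are what remain after the prefix; subtract the
    # network and broadcast addresses to get usable hosts
    hosts_per_subnet = 2 ** (32 - new_prefix) - 2
    return len(subnets), hosts_per_subnet

# Splitting a /24 range with a /26 mask: 4 subnets of 62 usable hosts each
print(subnet_counts("192.168.1.0/24", 26))  # (4, 62)
```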
ARC206 Extend your Existing Data Center to the cloud with Amazon VPC - AWS re... - Amazon Web Services
This document discusses various network architectures and connectivity options for connecting an on-premises customer network to the AWS cloud. It presents diagrams of virtual private cloud (VPC) subnet and availability zone configurations, the use of security groups and network access control lists to control traffic, and options for internet VPN, AWS Direct Connect, load balancing, and remote access. The goal is to help customers reinvent their network by extending it to AWS securely and with high availability.
This document outlines steps for creating a VPC configuration on AWS:
Step 1 involves creating public and private subnets, a default route table, security groups, and an internet gateway to allow access to public subnets.
Step 2 adds a bastion server for secure access between public and private subnets, with internal traffic restricted by security groups.
Step 3 introduces a NAT gateway to allow instances in private subnets internet access without a public IP.
The document notes that further sessions will cover scaling for availability across availability zones.
The document discusses the Domain Name System (DNS), which translates domain names to IP addresses. It has three main components: the name space that defines the domain name structure, resolvers that extract information from name servers, and name servers that store information about the domain name structure. The DNS uses a hierarchical and distributed database to map domain names to IP addresses across a network like the internet.
Architecture is the art and science of designing and constructing buildings and other structures. Some key materials used in architecture include stone, brick, wood, cast iron, steel, concrete, and shell structures. Common architectural styles include post-and-lintel, arches, vaults, trusses, domes, and buttresses. Sculpture is the art of shaping figures out of materials like marble, bronze, wood, ivory, and terra cotta using techniques such as carving, modeling, casting, construction, and assemblage. Common sculptural forms include relief sculptures, free-standing sculptures, kinetic sculptures, and assemblage sculptures.
Simon Byrne visited a hospital site to brainstorm ideas for a sculpture commission. Byrne took photos and measurements, researched the history of sculpture materials, and began sketching multiple design concepts. These included kinetic, light, water, and sound sculptures inspired by other artists' works. Byrne has commenced 3D modeling of an idea involving a spine-like form and is yet to start prototyping with purchased clay materials.
This document provides an overview of construction site organization for a building project. It discusses preparing for the site by collecting technical, geographic, climatic and other relevant data. It describes the initial site work of establishing fences, access roads and excavating foundation pits. The document outlines considerations for on-site logistics like traffic flow, materials storage, temporary buildings and transport of workers, equipment and materials. Proper planning of construction site organization is emphasized as important for efficiency and cost-effectiveness of the building process.
The document discusses subnetting and CIDR notation. It covers the benefits of subnetting, such as reduced network traffic, optimized performance, simplified management, and spanning large geographical distances. It defines subnet masks and CIDR notation, and discusses how to calculate the number of subnets and hosts for a given subnet mask in CIDR notation. Finally, it provides an example of subnetting the Class C network 192.168.10.0 with a /25 mask into two subnets.
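The final example can be checked with Python's standard `ipaddress` module — a sketch, assuming the intended split is the /24 Class C network 192.168.10.0 divided by a /25 mask into two subnets:

```python
import ipaddress

# Class C network from the example, split with a /25 mask
network = ipaddress.ip_network("192.168.10.0/24")
halves = list(network.subnets(new_prefix=25))

for subnet in halves:
    # .num_addresses counts every address; subtract the network
    # and broadcast addresses to get usable hosts
    print(subnet, "usable hosts:", subnet.num_addresses - 2)
```

This prints the two subnets, 192.168.10.0/25 and 192.168.10.128/25, each with 126 usable hosts.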
This document outlines a project to simulate a centralized data center architecture by configuring operating system components like DNS, DHCP, and LDAP on Linux servers. The objectives are to set up DNS and DHCP servers, an LDAP server, and an Oracle database on three Linux servers connected by a switch. The current status shows the Linux clients connected to one Linux server running LDAP, DHCP and DNS. The document describes configuring each component and the significance of learning about technology infrastructure and tasks in a corporate environment.
Mayan sculpture was an important art form for the ancient Mayan civilization. Sculpture was created through both subtractive and additive techniques, with stone and wood as common materials. Mayan sculpture depicted important religious and political figures and events and provided cultural and historical context.
SCULPTURE: ADDITIVE,SUBTRACTIVE AND KINETICChan Delfino
Sculpture is a three-dimensional art form created by shaping hard materials like stone, metal, glass, or wood. There are three main types of sculpture: subtractive, which involves removing material like in carving; additive, which is most common today and involves adding material; and kinetic, containing moving parts. Sculpture is created through four basic processes - carving, modeling, casting, and construction - and can also involve assemblage, relief, or kinetic elements.
This document provides an introduction to subnetting basics. It begins by covering prerequisite knowledge, including classful network addressing, subnet masks in dotted decimal and prefix length notation, and the default subnet masks for Classes A, B, and C. It then explains how to identify the subnet and host bits when given an IP address and prefix length. The document demonstrates how to calculate the number of subnets and hosts available by using binary math equations. It provides an example of analyzing an IP address of 192.168.32.158/28 to determine its subnet ID and host ID.
This document provides information about different types of stairs. It defines key stair components like steps, treads, and risers. It then describes 8 common types of stairs including straight stairs, dog-legged stairs, quarter turn stairs, and spiral stairs. Each type is defined and the suitable applications are outlined. The document aims to inform about the different shapes, materials, and styles of stairs that can be used in buildings.
The document discusses the Domain Name System (DNS) and its components. It explains what DNS is, how it works to translate domain names to IP addresses, the different record types used in DNS like A, NS, MX records. It describes DNS name servers, resolvers, zones and namespaces. It provides examples of DNS configuration files for both master and slave name servers as well as sample zone files mapping names to IP addresses.
Through subnetting, a network administrator can logically divide a single network into multiple subnets with fewer hosts on each. This reduces broadcast traffic across the entire network. The key concept is borrowing bits from the host portion of the IP address to create the subnet portion. For each address class, only a certain number of bits can be borrowed to create subnets while ensuring some bits remain for host IDs. Calculations using simple formulas allow determining the number of subnets and hosts per subnet available for any given subnet mask.
This document summarizes several Azure DevOps services including Azure Boards for tracking work, Azure Repos for source control, Azure Pipelines for continuous integration and delivery, Azure Test Plans for testing, and Azure Artifacts for package management. It provides brief descriptions of the key capabilities of each service, such as Kanban boards and reporting in Azure Boards, Git hosting and code search in Azure Repos, support for any language or platform in Azure Pipelines, and end-to-end traceability in Azure Test Plans. The presentation concludes by thanking the audience and inviting questions.
The document is an agenda for an event discussing Azure DevOps tools and projects. The agenda includes:
- Breakfast and opening from 8:30-9:00
- A presentation on Azure DevOps tools from 9:00-9:45
- A presentation on Azure PaaS projects and agile development from 9:45-10:30
- A panel discussion from 10:30
- Lunch
The document provides details on the presentations and panels planned during the event.
This document provides an overview of Azure DevOps and its key components:
1. Azure DevOps is a suite of tools and services that helps enable continuous delivery by bringing together people, process, and products. It includes Azure Pipelines, Azure Boards, Azure Repos, Azure Test Plans, and Azure Artifacts.
2. Azure Pipelines allows users to build, test, and deploy applications with continuous integration/continuous delivery (CI/CD) using any language, platform, or cloud. It offers free unlimited build minutes for open source projects.
3. The other components allow users to plan and track work (Azure Boards), host Git repositories (Azure Repos), test applications (Azure Test
DevOps brings together people, processes, and technology to automate software delivery and provide continuous value to users. Azure DevOps provides tools to help with continuous integration (CI), continuous delivery (CD), and continuous learning and monitoring. It offers Azure Boards for planning and tracking work, Azure Repos for source control, Azure Pipelines for CI/CD, Azure Test Plans for testing, and Azure Artifacts for package management. Azure DevOps supports organizations of all sizes with an integrated, enterprise-grade DevOps toolchain.
Azure DevOps: the future of integration and traceabilityLorenzo Barbieri
Slides I presented at Landing Festival in Berlin, on April, 3rd 2019 about Azure DevOps features, its integration with GitHub and possible integrations with OSS and 3rd party tools.
DevOps brings together people, processes and technology, automating software delivery to provide continuous value to your users. With Azure DevOps solutions, deliver software faster and more reliably—no matter how big your IT department or what tools you are using
DevOps brings together people, processes and technology, automating software delivery to provide continuous value to your users. With Azure DevOps solutions, deliver software faster and more reliably—no matter how big your IT department or what tools you are using
DevOps brings together people, processes, and technology to automate software delivery and provide continuous value to users. Using Azure DevOps, organizations can deliver software faster and more reliably regardless of team size or tools used. Azure DevOps provides tools for continuous integration, continuous delivery, and continuous monitoring to support DevOps practices. It offers free and paid plans that scale from individual and open source projects to large enterprises.
This document provides information about Azure DevOps and DevOps practices. It discusses how DevOps brings together people, processes, and technology to automate software delivery and provide continuous value to users. It also outlines some key DevOps technologies like continuous integration, continuous delivery, and continuous monitoring. Additionally, the document shares how Azure DevOps can help teams deliver software faster and more reliably through tools for planning, source control, building, testing, and deploying.
Bijeet Pradhan has over 5 years of experience in infrastructure design, implementation, and monitoring. He has expertise in automation tools like Chef, Puppet, and Ansible, as well as cloud platforms including AWS, GCP, and Azure. He aims to collaborate with clients to develop automation strategies and deployment processes.
Continues Integration and Continuous Delivery with Azure DevOps - Deploy Anyt...Janusz Nowak
Continues Integration and Continuous Delivery with Azure DevOps - Deploy Anything to Anywhere with Azure DevOps
Janusz Nowak
@jnowwwak
https://www.linkedin.com/in/janono
https://github.com/janusznowak
https://blog.janono.pl
Microsoft Ignite 2018 BRK3192 Container DevOps on AzureJessica Deen
This document provides an overview of DevOps concepts and tools. It discusses containers and container orchestration with Kubernetes. It also mentions Azure DevOps and Azure Kubernetes Service (AKS) as tools that can help with DevOps practices like continuous integration/delivery (CI/CD). Helm charts are presented as a way to define and manage complex Kubernetes applications and services. Some best practices for Kubernetes are also listed.
DevOps is an approach that brings together people, processes, and technologies to enable continuous delivery of value to end users. It aims to shorten the development life cycle and improve automation of software delivery. Azure DevOps provides tools like Azure Boards, Azure Repos, Azure Pipelines, and Azure Test Plans to support DevOps practices like continuous integration, continuous delivery, and continuous monitoring through automation.
Evangelos Kapsalakis, Partner Specialist at Microsoft, provides valuable insights on Microsoft Azure and its flexibility when it comes to migration deployment. From Cloud Migration Through Automation: Next Level Flexibility virtual event, hosted on September 30, 2020
This document summarizes the Azure DevOps tools for continuous delivery, including Azure Boards, Azure Repos, Azure Pipelines, Azure Test Plans, and Azure Artifacts. It discusses how these tools work together to enable DevOps practices like continuous integration, delivery, testing and monitoring. It also highlights how Azure DevOps integrates with GitHub and is used by Microsoft itself for development workflows.
Eclipse Che & Codenvy allow developers to contribute to projects within seconds by providing preconfigured developer workspaces in the cloud. These workspaces integrate common developer tools, version control systems, and runtime environments behind a shared interface. Codenvy offers both on-premise and SaaS options for provisioning secure, multi-tenant workspaces that can be customized through extensibility features of the underlying Eclipse Che platform.
The document provides an overview of Azure DevOps and why JavaScript developers should use it. It discusses features like source control, boards for tracking work items, pipelines for continuous integration and delivery, and testing. It also includes a demo of setting up a sample Create React App project in Azure DevOps, including configuring a pipeline to build and deploy the app to an Azure App Service. Resources for learning more about Azure DevOps, using it with JavaScript projects, and understanding Git are also provided.
Similar to The site architecture you can edit (20)
The Wikimedia sites are a massive volunteer effort to share knowledge with the world. These volunteer efforts have created the 5th largest web presence in the world, and one of the largest collections of knowledge online. This effort is so successful because of the willingness and drive of people to share knowledge, the openness of the content, and the freedom our environment provides to create.
A major freedom of our sites is the ability to edit. If you are reading an article and see a typo, you can fix it. If you are reading an article about your hometown, and some information you feel is interesting or important is missing, you can add it. If something is wrong, you can correct it. But in the current state of the web, editable content isn't terribly novel. Facebook, Twitter, Slashdot, Digg, Reddit, and the like are all built on user-generated content.
However, we extend the concept of editing to the interface as well. It's possible to edit interface messages, site JavaScript, site CSS, the navigation sidebar, sitenotices, and other interface elements that control how users see and interact with the sites. It's also possible to edit one's own CSS and JavaScript, so that one's experience is customized. A community member may do activities that require quite a bit of repetition, and as such may write tools to automate or add usability to their process. Often these tools are very useful to others in the community as well, and can be shared, like the content of our sites.
We also extend the concept of editing to our software. Changes to our software, like our content are open to all. The software is GPL licensed, and is used extensively by third parties. In fact, I became a volunteer for the Wikimedia effort by using and improving the software for another organization. We give out commit access fairly liberally. If you have the PHP chops, and discuss what you'll be working on, you'll very likely get commit access.
We also extend the editability of our stack to our translations. We have a very diverse community, and a lot of that diversity is language diversity. We have projects in 250 languages, and as such the software must also be localized for those languages. We use translatewiki.net to localize the MediaWiki software, and like the Wikimedia sites, translatewiki.net is volunteer-based, and uses MediaWiki. We have localization support in over 300 languages, all of which was totally volunteer created. The translatewiki.net community has roughly 2,500 members, and continues to grow. We continue our trend of letting community members edit everything by going deeper and deeper into our stack.
We extend the concept of editing further into our stack, and into our site architecture, from the documentation point of view. People are occasionally shocked at the level of openness of our community, and our environment. For instance, we have all of our architecture documentation on a public wiki. I recently did an LDAP implementation, and while doing so, I added complete documentation on the implementation. It covers installation, configuration, directory information tree design, schema design, security decisions, and backup design. This gives a freedom to the community to participate in our site architecture, at least from a viewing perspective. Our working environment is very public, and this includes our logging and communications, which occurs via IRC.
The operations team's daily working environment is IRC, and we log server actions via bots in IRC. These logs are available on a wiki. This provides the community with the freedom to participate in our site operations by watching us work, and offering help. Operations volunteers can work with us in this environment, and add to the log as well. This is unfortunately the depth level at which editing stops. Our openness continues though.
We go one step further than simply opening our operations documentation. We also provide live versions of a lot of our configuration files. Right now it isn't possible to see all of our configuration, but this is something I'd like to change. This provides the community the ability to actively participate in our site architecture by providing patches to fix issues, or to enable features.
Our monitoring infrastructure is also publicly accessible. We have one service that is meant to be a dashboard of whether our services are up or not, and we have another that is meant more for backend support.
Like monitoring, our performance statistics are publicly viewable. This, along with monitoring, provides the community the freedom to participate by reporting issues, or informing other users of ongoing issues. It can also be used to help us diagnose issues as they are occurring.
The theme here is that we as a community, and specifically the Wikimedia Foundation emphasize the empowerment of our community through freedoms expressed in software, and open content. Our community also empowers itself through self-funding. Wikimedia is a non-profit and is funded by its community. We are primarily funded by small donations made by our editors and readers.
The power of these freedoms is outstanding. The explosive growth of Wikipedia is a testament that people want to share knowledge, and will if they have the ability to easily do so. So let's look back to the beginning of this project, to see how things began.
The freedoms of our community used to extend through our entire environment, all the way to root access on our production cluster. When Wikipedia was first started, there was no paid tech staff. Everyone was a volunteer, including all roots.
Unfortunately, with the massive growth of the sites, and the importance of keeping the sites up and the content secure, we now lack the ability to easily give root access to volunteers. Most of our operations team is paid staff, and we haven't had a new ops volunteer with production cluster access in a while.
We also have a similar situation occurring with development. We have foundation initiated projects occasionally, and we are having a difficult time properly integrating our volunteer developers into these projects.
This is where OpenStack comes in. I believe that OpenStack is an empowerment technology, and will provide our community with a new, quite awesome, freedom to participate. I feel OpenStack is especially empowering because it is open source. There are some closed source solutions we could use to empower our users, but OpenStack being open source means we can change the software how we need, and we can steer the project in a way that ensures our project will continue to work in the way we need. Also, the foundation has a policy to use open source unless there is no alternative. Often we won't use closed source even if there is no other alternative.
The way I'm going to use OpenStack to empower our community is going to be a community oriented test and development environment.
I have three main goals:
1. Improve collaboration between staff and volunteers for software development.
2. Have a process for providing higher levels of access for people who are not on the paid operations team staff. This includes staff developers, and all volunteers. I'd like to have an environment where anyone can eventually become root, even on our production cluster.
3. Have an environment where we can test major changes before we deploy them to the live site. We currently have no test environment.
We can achieve the goals by providing liberal access to an environment that is a clone of our production environment. In this environment it should be possible to add new architecture without affecting the production clone. Users should be able to make root level changes without having root, and they should be able to eventually have these changes implemented on the production cluster.
The basic use case is that the operations team will create an initial default project. This project will be a clone of our production cluster. Like our production cluster, direct root access will be limited here. However, shell access will be given out fairly liberally. Basically, if someone has MediaWiki commit access, or they wish to do volunteer operations work, they'll be given access. This environment will be used for most test and development; it will not be used for production work. I hope for this to be used as a shared environment where staff and volunteers can collaboratively work together on projects. I also would like this environment to be a place where we can do operations testing, such as failover between datacenters, and service degradation tests.
New projects should mirror community or foundation initiatives. These will be used for new site architecture. For instance, we are implementing Open Web Analytics currently, and this required architecture that is separate from our normal architecture. In our production environment, it is difficult to give out root access, which made OWA integration more difficult than necessary. In this test/dev environment, ops can create a project, assign members, and let the developers create the architecture themselves. Once the devs create the architecture, and are happy with how it is working, they can create puppet manifests describing the system design, and can push the manifests to the test/dev puppet git branch for review. After the community approves them, ops can approve them, and merge the changes in. Once the changes are merged in, ops can create instances for this project in the default project using the puppet configuration. If everything looks good, and it is interacting with the production clone properly, ops can merge the test/dev branch changes to the production puppet git branch. After merging the changes, ops can add hardware, and bring the systems online, and add the project to the production environment. This is basically having root, without having root! We are treating operations as a software development project.
Let's look into how I'll be implementing this project.
Here's the current architecture I've built. It contains the following:
- MediaWiki, which is the user's interface for controlling most of the architecture
- LDAP, which is used for tight integration of all services and instances
- DNS, which is controlled by MediaWiki
- Puppet, which from the instance's point of view is controlled by MediaWiki
- Gerrit, which is a Git interface for code review and will contain all puppet information
- Nova, which is used for managing infrastructure
First, let's look at how openstack fits in.
We are running a multi-node nova installation with MySQL and LDAP. We are starting small, with one controller, and three compute nodes so that we can properly vet the architecture. In the future we'll likely grow this quite a bit. Ideally we'll have a test/dev zone in each of our datacenters. We'll also likely use a production zone in each datacenter to host instances for some of the miscellaneous services we run that aren't necessary for the site. We also have a possible future project, that isn't yet budgeted, or confirmed as an official project, called Wikimedia Labs. This will be an environment that is a clone of our production environment that will have much more liberal access than the test/dev environment. It'll be for tools and research.
Next, let's look at MediaWiki, which we are using to control this architecture.
I wrote a MediaWiki extension called OpenStackManager. In conjunction with the LdapAuthentication extension, it controls all aspects of the environment. The extension supports essentially all of the EC2 exposed functionality of Nova. As a plus, it also enables the self-documentation of the architecture. When a nova resource is created, it automatically pulls instance information from a number of places and creates MediaWiki templates. This information is also kept up to date when things change, or when resources are deleted. I extend this documentation to be more useful too, though.
I extend this documentation with Semantic MediaWiki (SMW). SMW is a system for adding structured data to a wiki. It can add semantic annotations to wiki content, which turns the content into data that is queryable and exportable. These queries can output the data in a number of different formats. So, the templates that are being created have their data turned into structured data, and as such, you can run queries on it. On my blog, I also show how you can use this data inside of system scripts, via the JSON display formats of SMW.
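As a rough illustration of using this structured data from a script, the sketch below parses the kind of JSON an SMW query can emit and pulls out the instances belonging to a project. The property names ("Instance Name", "Project", "RAM Size") and page titles here are hypothetical, not the exact schema OpenStackManager produces:

```python
import json

# Hypothetical JSON response from a Semantic MediaWiki query using a
# JSON result format; the structure and property names are illustrative.
SAMPLE_RESPONSE = """
{
  "results": {
    "Nova Resource:i-00000001": {
      "printouts": {
        "Instance Name": ["testbox1"],
        "Project": ["testproject"],
        "RAM Size": [2048]
      }
    },
    "Nova Resource:i-00000002": {
      "printouts": {
        "Instance Name": ["webproxy1"],
        "Project": ["testproject"],
        "RAM Size": [4096]
      }
    }
  }
}
"""

def instances_by_project(raw_json, project):
    """Return the sorted instance names that belong to a project."""
    data = json.loads(raw_json)
    names = []
    for page in data["results"].values():
        printouts = page["printouts"]
        if project in printouts.get("Project", []):
            names.extend(printouts.get("Instance Name", []))
    return sorted(names)

if __name__ == "__main__":
    print(instances_by_project(SAMPLE_RESPONSE, "testproject"))
```

In practice a script would fetch this JSON from the wiki's API rather than embed it, but the parsing step is the same.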
Here's an example of the MediaWiki template that is created, in property/value format.
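The slide example isn't reproduced here, but such a template call might look something like the following. The template name, property names, and values are all hypothetical, not the exact output of OpenStackManager:

```
{{Nova Resource
|Resource Type=instance
|Instance Name=testbox1
|Instance Id=i-00000001
|Project=testproject
|Instance Host=compute1
|RAM Size=2048
|Number of CPUs=2
}}
```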
Here's an example of a really basic query that outputs in broadtable format, for displaying the information in the wiki.
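A basic SMW inline query of this sort could be written roughly as follows; the property names are hypothetical, matching the illustrative template above rather than the real schema:

```
{{#ask: [[Resource Type::instance]] [[Project::testproject]]
 |?Instance Name
 |?RAM Size
 |format=broadtable
}}
```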
Next let's look at how I'm using LDAP (using OpenDJ).
LDAP is used for all services in the architecture. LDAP is also used for the instances that are created as well. Nova concepts are expanded to system level concepts on the instances. For instance, a nova account (which is the user's wiki account), is the user's instance shell account. When a user is added to a nova project, they also are added to a posix group on the instance. It's also possible to give sudo access by adding users into a special role.
Authentication and authorization is done via LDAP, and is managed by the OpenStackManager and LdapAuthentication extensions. When a MediaWiki account is created for a user, nova, gerrit, and shell account credentials are also added.
Here's an example of an LDAP entry that is created.
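The slide example isn't reproduced here, but a user entry of this kind might look roughly like the following LDIF. The DN, object classes, and values are illustrative assumptions, not copied from the actual Wikimedia directory:

```
# Hypothetical user entry: one account serves as the wiki,
# nova, gerrit, and instance shell identity.
dn: uid=testuser,ou=people,dc=wikimedia,dc=org
objectClass: inetOrgPerson
objectClass: posixAccount
uid: testuser
cn: TestUser
sn: TestUser
uidNumber: 1001
gidNumber: 1001
homeDirectory: /home/testuser
loginShell: /bin/bash
mail: testuser@example.com
```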
Next, let's look at how we are integrating puppet.
We are using puppet to manage all instances that are created. When users create instances, the puppet information is added to LDAP for that instance. Puppet is integrated with LDAP, where all puppet nodes are stored in LDAP. There is some puppet information that is always added for instances. Specifically, MediaWiki adds variables for the instance's project, the user's wiki name, their email address, and their language. I use this in puppet to send an email to a user, in their language, telling them when their instance is finished being created. More puppet classes and variables can be added by default via the MediaWiki config, so this is extendable for your own purposes.
Next let's look at how I'm handling DNS.
Like puppet, DNS uses LDAP as the backend for its information. The OpenStackManager extension manages both private and public DNS domains. When an instance is created, it also adds a private DNS address. When a user allocates an IP address, and associates that address with an instance, they can also add public DNS information to that address.
Here's an example of an LDAP entry for an instance. Notice that both puppet and DNS information are on the same entry. The nice thing about this is that puppet can use all of the attributes as variables, so the DNS information can be used in puppet manifests. One especially nice consequence is that the private DNS entries have a location field, which can be used in puppet to do location-specific configuration.
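A combined instance entry might look roughly like the following LDIF sketch. The DN, attribute values, and the exact mix of object classes are assumptions for illustration; the attribute names are in the style of puppet's LDAP node terminus and an LDAP-backed DNS server, not a verbatim dump of the real entry:

```
# Hypothetical instance entry holding both puppet and DNS data.
dn: dc=testbox1,ou=hosts,dc=wikimedia,dc=org
objectClass: dcObject
objectClass: domainRelatedObject
objectClass: puppetClient
dc: testbox1
aRecord: 10.0.0.15
associatedDomain: testbox1.testproject.example.org
l: pmtpa
puppetClass: base
puppetClass: ldap::client
puppetVar: instanceproject=testproject
puppetVar: instancecreator_email=testuser@example.com
puppetVar: instancecreator_lang=en
```

Because puppet reads this same entry, a manifest can branch on the `l` (location) attribute to apply datacenter-specific configuration.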
Next let's see how I'm using Nova's metadata service.
We are using cloud-init fairly extensively to bootstrap puppet. MediaWiki can be configured to add default cloud-init configuration, scripts, and upstart jobs to instances.
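A minimal cloud-config sketch for bootstrapping puppet on a new instance might look like this; the package name and the puppetmaster hostname are assumptions, not our actual configuration:

```yaml
#cloud-config
# Hypothetical user-data passed via the nova metadata service.
packages:
  - puppet
runcmd:
  # One-shot puppet run against an assumed puppetmaster hostname.
  - puppetd --onetime --verbose --server puppetmaster.example.org
```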
Lastly, there is Gerrit.
Gerrit is a code review tool for git that manages the git repositories and can also handle things like branching and merges. All wiki users will have the ability to branch, and propose changes for merge. Two approvals will be required for merges. The first approval will come from the community. The second approval will come from ops. Ops will have the ability to merge. This is an example of a place we can increase privileges for non-staff ops. Over time we can give out the ability for non-staff ops to do final approvals and merges in the test/dev branch.
We can use help with this! There are a number of things we'd like to accomplish with nova, the OpenStackManager extension, and with our test/dev architecture that will take a lot of work. If you'd like to work with us, we are hiring. If you would like to volunteer we are very welcoming. You don't need to be an expert to volunteer. If you are looking to learn more about openstack, and want to help, we are more than willing to do mentorship on the project, as long as you can help us get the work done!
Though my efforts are Wikimedia oriented, I also took consideration of how I could build this to benefit outside organizations. Every organization I've worked at has had contention between operations and developers when it comes to the level of access that is given to developers. Generally the developers want full access, but don't follow ops procedures closely enough for the ops team's tastes. This architecture focuses on giving developers a high level of access while forcing standardized procedures where they are necessary. In the cluster clone they do not get root access, but can make changes through puppet. In their projects they get full access, but must standardize their builds before they can be deployed. I'd like our architecture to be a reference to vet this idea.
Another thing I often see is architecture documentation that is out of date. This architecture focuses on solving this problem as well, as the documentation is mostly handled automatically. Also, this documentation can be used in a structured way by the use of queries on the structured data, allowing the documentation to also be used for scripts, or data calls.
Please don't hesitate to contact me. I'm very active on IRC, and would love to talk to you.