The document discusses using Puppet to manage OpenStack deployments. It provides an overview of OpenStack, describes how Puppet can be used to deploy and configure OpenStack services and components, and highlights advantages of Puppet such as its resource abstraction layer, ordering, composability, and active community. It also notes challenges of orchestration, high availability, testing, and keeping Puppet code up to date with new OpenStack releases.
The document summarizes the author's experience working on the OpenStack Compute project. It describes how the project started small, with a few developers successfully launching VMs in a weekend. It then grew rapidly, with over 70 active contributors and 100 companies involved within 15 months. Key features like iSCSI support and high-availability networking could be prototyped and moved into production quickly thanks to OpenStack's agile development process and Python codebase. The author concludes that OpenStack Compute was the best project to work on due to its agile and dynamic nature, high-profile technology, and active community.
This document provides an introduction to orchestrating software deployments with Kubernetes. It discusses common challenges like deploying code to the cloud, horizontal scaling, rollouts and rollbacks that Kubernetes addresses. The basics of Kubernetes components like pods, deployments and ingress are explained. It also gives an example of creating a Kubernetes deployment and lists some important Kubernetes commands.
Cyberinfrastructure and Applications Overview: Howard University, June 22 (marpierc)
1) Cyberinfrastructure refers to the combination of computing systems, data storage systems, advanced instruments and data repositories, visualization environments, and people that enable knowledge discovery through integrated multi-scale simulations and analyses.
2) Cloud computing, multicore processors, and Web 2.0 tools are changing the landscape of cyberinfrastructure by providing new approaches to distributed computing and data sharing that emphasize usability, collaboration, and accessibility.
3) Scientific applications are increasingly data-intensive, requiring high-performance computing resources to analyze large datasets from sources like gene sequencers, telescopes, sensors, and web crawlers.
Puppet Camp Melbourne 2014: Node Collaboration with PuppetDB (Puppet)
The document discusses how PuppetDB can help with infrastructure provisioning and configuration challenges by providing a centralized database for Puppet data. PuppetDB stores facts, catalogs, and other data produced by Puppet runs. This data can be queried using the PuppetDB API or modules like PuppetDBquery to dynamically configure systems based on the states of other nodes and share information between nodes. Examples provided include using PuppetDB to populate files, load balance systems, and allow services to discover each other.
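The fact-based queries mentioned above can be sketched as follows. This is a minimal illustration of PuppetDB's v4 AST query format, with a hypothetical host and fact values; it builds the query and URL but does not send the request (a real client would use an HTTP library against a reachable PuppetDB):

```python
import json
from urllib.parse import urlencode

# Sketch of building a PuppetDB v4 API query. Host name and fact values
# are illustrative; sending the request would need a reachable PuppetDB.
def fact_query(fact, value):
    """AST-style query matching nodes whose fact `fact` equals `value`."""
    return ["=", ["fact", fact], value]

def query_url(base, endpoint, ast):
    """Encode the AST as the `query` parameter of a v4 endpoint URL."""
    return f"{base}/pdb/query/v4/{endpoint}?" + urlencode({"query": json.dumps(ast)})

# Find every node whose `role` fact is "loadbalancer", e.g. to populate
# a backend list the way the summary's load-balancing example suggests.
print(query_url("http://puppetdb:8080", "nodes", fact_query("role", "loadbalancer")))
```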
Managing your own PostgreSQL servers is sometimes a burden your business does not want. In this talk we will provide an overview of some of the public cloud offerings available for hosted PostgreSQL and discuss a number of strategies for migrating your databases with a minimum of downtime.
This document provides an overview of the Open Grid Computing Environments (OGCE) project, including portals, services, workflows, gadgets, and tags they develop. It discusses how OGCE software is used in science gateways and contributes code back to these projects. It also summarizes upcoming and existing OGCE services, strategies for adopting web 2.0 technologies, examples of OGCE gadgets and integration with open social containers, and a plan to integrate these components for demonstration at SC09.
This document summarizes the Open Grid Computing Environments (OGCE) project. It describes OGCE software tools like the Gadget Container, XBaya workflow composer, and GFAC application wrapper. It focuses on providing these tools to enable running science applications on grids and clouds. The tools can be used individually or together. OGCE outsources security and data services to providers like Globus, Condor, and iRODS. It supports workflows like GridChem, UltraScan, and bioinformatics pipelines. The software is open source and available via anonymous SVN checkout.
The document provides an overview of the Open Grid Computing Environments (OGCE) project, which develops and packages software for science gateways and resources. Key components discussed include the OGCE portal for building grid portals, Axis services for resource discovery and prediction, a workflow suite, and JavaScript and tag libraries. The document describes downloading and installing the OGCE software, which can be done with a single command, and discusses some of the portlets, services, and components included in the OGCE toolkit.
Apache Toree provides an interactive notebook for Spark/Scala. Toree is an IPython/Jupyter kernel. It lets you mix Spark/Scala code with Markdown, execute the notebook, and publish it on the web.
Asim will talk about how to install and get started with Apache Toree, how to use it to develop Spark applications interactively in notebooks, and how to publish your notebooks.
Real Time Graph Computations in Storm, Neo4J, Python - PyCon India 2013 (Sonal Raj)
This talk briefly outlines the Storm framework and Neo4J graph database, and how to compositely use them to perform computations on complex graphs in Python using the Petrel and Py2neo packages. This talk was given at PyCon India 2013.
This document provides an overview of Apache Kafka and Storm for distributed stream processing. It describes Kafka's architecture as a distributed commit log and covers topics, producers, consumers and clients. For Storm, it outlines the architecture including spouts, bolts and groupings. The document then provides guidance on coding topologies with spouts and bolts in Java and deploying locally or on a cluster.
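The spout/bolt model summarized above can be sketched without any framework. This toy Python word count mirrors a Storm topology's shape (a spout emitting sentences, a split bolt, and a fields-grouped count bolt); all names are illustrative, not Storm's actual Java API:

```python
from collections import Counter

# Framework-free sketch of Storm's spout/bolt model: a spout emits
# tuples, a "fields grouping" routes each word to a fixed bolt
# instance, and each bolt instance keeps a running count.
def sentence_spout():
    for line in ["to be or not to be", "to stream is to process"]:
        yield line

def split_bolt(line):
    for word in line.split():
        yield word

def fields_grouping(word, n_tasks):
    # Same word always routes to the same task within a run.
    return hash(word) % n_tasks

counts = [Counter() for _ in range(2)]  # two count-bolt "tasks"
for line in sentence_spout():
    for word in split_bolt(line):
        counts[fields_grouping(word, 2)][word] += 1

total = sum(counts, Counter())
print(total["to"])  # "to" appears 4 times across both sentences
```

The fields grouping is the key idea: because a given word always lands on the same task, each task's partial count is exact, so the final merge is a simple sum.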
Do you want to know how to upgrade the Elastic Stack to v7.7? Join Melvyn from the Education team and Camilo Sierra from Support to get more details about the upgrade order, strategies, and common issues.
Nebula provides open source Gradle plugins to help with tasks like testing, publishing, and dependency management. The document describes Nebula's plugin infrastructure and several plugins including nebula-test, nebula-core, nebula-project, gradle-nebula-oss-project, gradle-ospackage, gradle-dependency-lock, and gradle-publishing. It also mentions that Netflix is hiring build tools and full stack engineers.
Hw09 Building Data Intensive Apps: A Closer Look at TrendingTopics.org (Cloudera, Inc.)
The document describes TrendingTopics.org, a website that detects trending topics using Wikipedia page view data analyzed with Hadoop on Amazon EC2. It loads over 1TB of Wikipedia page view logs into Hadoop on EC2, uses Hive and Python to build daily timelines and detect trends, and hosts the results on a Ruby on Rails front end also running on EC2. All the code is open source and hosted on GitHub.
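The trend-detection step described above can be illustrated with a toy scoring function; the data and formula here are invented stand-ins for the Hive/Python pipeline the talk describes:

```python
# Score each page by how much its latest day's views exceed its
# trailing average; a sudden spike scores high, steady traffic near zero.
def trend_score(views):
    """views: chronological daily page-view counts; higher = more trending."""
    *history, today = views
    baseline = sum(history) / len(history)
    return (today - baseline) / (baseline + 1)  # +1 damps tiny baselines

pages = {
    "Hadoop": [100, 110, 90, 105, 400],           # sudden spike -> trending
    "Main_Page": [9000, 9100, 8900, 9050, 9000],  # flat -> not trending
}
trending = sorted(pages, key=lambda p: trend_score(pages[p]), reverse=True)
print(trending[0])  # Hadoop
```

Normalizing by the baseline is what lets a small page with a spike outrank a huge page with flat traffic.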
A talk at the Molecular Informatics Open Source Meeting (MIOSS) at the European Bioinformatics Institute (EMBL-EBI) in Hinxton, Cambridge, United Kingdom.
This document discusses how OpenStack can be used to meet the needs of a smart grid system. It provides step-by-step instructions for installing and configuring OpenStack identity, image, compute, storage, block storage, and networking services. It then discusses how business applications like enterprise collaboration, ERP, CRM can utilize cloud computing. Finally, it discusses how OpenStack can support internet of things and smart grid applications like meter monitoring, feeder monitoring, and power quality monitoring.
Learn how Autodesk broke the 300,000-issue barrier without impacting performance, keeping excellent uptime, with more than 3000 registered users and an average of 1800 concurrent users. In this session you will discover the hardware architecture, system settings, and other interesting data from Autodesk's experience in the field.
Aligning Continuous Integration Deployment: Automated Validation of OpenStack... (Atlassian)
Ever think to yourself: how can my team automate the processes for my complex system? How do Continuous Integration and Continuous Deployment fit in? In this talk by Teyo and Dan you will dive into the world of automation using Puppet and OpenStack. It starts with a brief overview of Puppet and OpenStack, then dives into examples of how to model complex deployments of OpenStack using Puppet.
Feedback from five years of Foreman experience managing different kinds of infrastructure. A story about open source. Given for the 7th birthday of The Foreman.
oVirt UI Plugin Infrastructure and the oVirt-Foreman plugin (Oved Ourfali)
This document discusses the oVirt UI plugin infrastructure and the oVirt-Foreman plugin. The oVirt UI plugin infrastructure allows extending or customizing the oVirt Engine Admin Portal functionality by adding UI components. The oVirt-Foreman plugin integrates Foreman data into the oVirt Admin Portal using this infrastructure, displaying Foreman details and graphs for virtual machines. Future work may include improved integration between oVirt and Foreman systems and management of plugins from within the Admin Portal.
This document appears to be a presentation on Agile methodologies and Kanban workflows. It discusses concepts like sprints, planning, executing work, reporting on work, using boards to visualize workflows, integrating feedback loops, minimizing work in progress, and adapting to changing priorities. The presentation emphasizes applying these principles broadly for any team or business.
The document summarizes an agenda for a Belgian Puppet Users Group meeting about MCollective. The agenda includes an overview of orchestration and MCollective, hands-on setup of MCollective, the MCollective command line tool, MCollective agents, and future plans. It also provides details about installing and configuring MCollective clients, servers, and the ActiveMQ middleware.
This document contains a presentation on writing flexible and scalable Puppet modules. The presentation discusses common issues that arise when modules are not designed to be flexible, such as unexpected changes in operating systems or use cases. It provides guidance on how to design modules to be more modular, parameterized, and able to handle unexpected changes over time through techniques like using Hiera for configuration, separating logic into submodules, and favoring composition over inheritance. The goal is to create modules that can be easily adapted and improved by both their original authors and other users.
This letter recommends Ms. Steffany Ramos for employment or education. The writer has known Ms. Ramos since 2012 when she was a student and excelled at producing, writing, directing, and editing a documentary film and TV pilot that won awards. Outside of class, the writer hired Ms. Ramos as a production assistant and found her to be talented, thoughtful, skilled, and reliable. The writer gives Ms. Ramos their highest recommendation and believes she will apply her commitment, dedication, and creative passion to further pursuits.
Pulp is an open source platform for managing repositories of content and distributing that content to client machines. It allows users to create and publish repositories by syncing content from remote sources, upload their own content, and push content out to any number of consumers. Pulp supports RPM and Puppet modules with plans to support additional content types like Python packages.
MySQL Cluster: Scaling to a Billion Queries (Bernd Ocklin)
MySQL Cluster is a distributed database that provides extreme scalability, high availability, and real-time performance. It uses an auto-sharding and auto-replicating architecture to distribute data across multiple low-cost servers. Key benefits include scaling reads and writes, 99.999% availability through its shared-nothing design with no single point of failure, and real-time responsiveness. It supports both SQL and NoSQL interfaces to enable complex queries as well as high-performance key-value access.
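The auto-sharding idea can be sketched as hash-based partitioning of primary keys across data nodes. This is a simplified illustration of the concept, not MySQL Cluster's actual partitioning function; node count and keys are made up:

```python
import hashlib

# Sketch of hash-based auto-sharding: each row's partition is derived
# from its primary key, so reads and writes spread across data nodes
# without any manual shard assignment.
def partition_for(key, n_nodes):
    digest = hashlib.md5(str(key).encode()).hexdigest()
    return int(digest, 16) % n_nodes

n_nodes = 4
placement = {user_id: partition_for(user_id, n_nodes) for user_id in range(1000)}
per_node = [list(placement.values()).count(n) for n in range(n_nodes)]
print(per_node)  # roughly even spread across the four nodes
```

Because the mapping is deterministic, any node can compute where a key lives without a central directory, which is what makes key-value access fast in this style of architecture.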
Monitis: All-in-One Systems Monitoring from the Cloud (Hovhannes Avoyan)
Monitis Cloud is a 6-in-1 systems monitoring software as a service that provides: 1) external monitoring of websites, file servers, mail servers, VoIP, and databases, 2) server monitoring of CPU, memory, processes and storage on Linux, Windows, FreeBSD and Solaris, 3) network monitoring using SNMP, ping, HTTP, SSH and discovery, 4) transaction monitoring of multi-step applications and workflows, 5) cloud monitoring of Amazon EC2, S3 instances, automation and usage, and 6) web traffic monitoring of visitors, page views, keywords and referrers.
Cody Herriges is a Puppet engineer who used to travel the world and whose father-in-law misunderstood his job. He now normally uses Puppet and Razor to automate OpenStack provisioning based on hardware profiles rather than using Cobbler, and he discussed Puppet Labs' acquisition of Razor from EMC and Razor's future directions. He invited questions and provided links to follow along and get a discount on PuppetConf 2013.
You've heard about Continuous Integration and Continuous Delivery, but how do you get code from your machine to production in a rapid, repeatable manner? Let a build pipeline do the work for you! Sam Brown will walk through the how, the when, and the why of the various aspects of a Continuous Delivery build pipeline and how you can get started tomorrow implementing changes to realize build automation. This talk will start with an example pipeline and go into depth on each section, detailing the pros and cons of different steps and why you should include them in your build process.
The document discusses Foreman, an open source tool that provides provisioning, configuration management, and reporting functions through a single interface. It can manage a server's full lifecycle from installation to updates and integrates with tools like Puppet, DNS, DHCP, and libvirt/RHEV. The document includes a demo of Foreman's inventory, node classifier, reporting, API, and user management features. It also provides information on Foreman's architecture, community, installation process, and some organizations currently using Foreman.
Farmacia Digital - Webinar post Infarma 2015 (Campus Sanofi)
A presentation on the use of digital tools in the pharmacy and blogs in the dispensary. Webinar held at Campus Sanofi by the pharmacists Francisco Cobo and Irune Andraca.
This presentation provides an introduction to OpenStack Quantum, the network connectivity component of OpenStack. It discusses what Quantum is, why it was created, its high-level architecture, current project status, and some additional details. Quantum provides virtual networking and network connectivity as a service for OpenStack compute instances. It aims to address limitations of the earlier nova-network component and provide more flexible network configuration and advanced networking capabilities.
OpenStack Training | OpenStack Tutorial For Beginners | OpenStack Certificati... (Edureka!)
This Edureka "OpenStack Training" tutorial will help you understand all the basics of OpenStack. We have demonstrated the OpenStack Deployment at PayPal using Cinder which will familiarize you with the Real-life applications of OpenStack. Below are the topics covered in this tutorial:
1. What is OpenStack?
2. OpenStack Architecture
3. OpenStack Components
4. PayPal Case Study
5. PayPal OpenStack System
6. EBay Implementation Model
7. Cinder Deployment at PayPal
Developing on OpenStack, Startup Edmonton (serverascode)
The title of the presentation might be a bit off. We gave about a 30-minute introduction to OpenStack, and then about a 30-minute demo on installing the Ghost blogging platform using Chef in an OpenStack cloud.
This summary provides an overview of the key points from the OpenStack security document:
1. OpenStack is an open source cloud computing platform consisting of several interrelated components like Nova, Swift, Keystone, etc. Each component has its own REST API and is responsible for a certain functionality like compute, storage, identity, etc.
2. The document discusses various security aspects and pain points related to different OpenStack components like authentication tokens, message buses, REST APIs, volumes, and intrusion detection.
3. It also covers strategies for incident response, forensics, and reporting vulnerabilities in OpenStack. Maintaining chain of custody for evidence and providing forensic access to tenants are highlighted.
4. Finally, the
OpenStack - An Introduction/Installation - Presented at Dr. Dobb's conference... (Rahul Krishna Upadhyaya)
The slides were presented at Dr. Dobb's Conference in Bangalore. They give a general introduction to OpenStack, cover the projects under OpenStack, and explain how to contribute to OpenStack. The talk was presented jointly by CB Ananth and Rahul at Dr. Dobb's Conference Bangalore on 12th Apr 2014.
1. GMO Internet operates multiple public cloud services using OpenStack including ConoHa public cloud and GMO AppsCloud.
2. They have a limited number of staff developing and operating OpenStack services across many clusters but must run a large number of OpenStack services.
3. They have upgraded their OpenStack installations over time from Diablo to Juno, expanding services from basic compute to block storage, object storage, load balancing, and more.
This presentation is a basic overview of the OpenStack cloud. It was presented on September 23, 2015 in Orlando, Florida at the Downtown UCF Incubation Office. The session provides a high-level overview of OpenStack and a list of training resources to get up to speed on OpenStack.
OpenStack Tutorial For Beginners | OpenStack Tutorial | OpenStack Training | ... (Edureka!)
This Edureka 'OpenStack Tutorial' explains all the OpenStack services - Compute, Storage, Networking, etc. It will also help you understand the architecture of an OpenStack cloud infrastructure and how all the services communicate with one another.
This document provides an overview of OpenStack, including:
1. It describes how OpenStack started from an email between engineers at NASA and Rackspace discussing building an open source cloud computing platform.
2. It defines OpenStack as an open source software platform for building private and public clouds that manages compute, storage, and networking resources.
3. It outlines the main OpenStack components - Dashboard, Keystone, Glance, Neutron, Nova, Cinder, Swift - and their functions in automating and orchestrating IT resources through a loosely coupled architecture.
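The loosely coupled architecture described above shows up in practice as service-catalog lookups: a client authenticates with Keystone and then resolves each component's endpoint from the returned catalog rather than hard-coding it. A toy sketch with a made-up catalog (real catalogs come back in Keystone's token response):

```python
# Simplified, invented service catalog mapping OpenStack service types
# to endpoints; in a real cloud Keystone returns this with the token.
catalog = [
    {"type": "compute",  "name": "nova",    "url": "http://cloud:8774/v2.1"},
    {"type": "image",    "name": "glance",  "url": "http://cloud:9292"},
    {"type": "volumev3", "name": "cinder",  "url": "http://cloud:8776/v3"},
    {"type": "network",  "name": "neutron", "url": "http://cloud:9696"},
]

def endpoint_for(service_type):
    """Resolve a service endpoint from the catalog by service type."""
    for entry in catalog:
        if entry["type"] == service_type:
            return entry["url"]
    raise LookupError(f"no endpoint for {service_type}")

print(endpoint_for("compute"))  # http://cloud:8774/v2.1
```

Because every component is found through the catalog, services can move or be replaced without clients changing, which is what "loosely coupled" buys in practice.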
CAPS: What's best for deploying and managing OpenStack? Chef vs. Ansible vs. ... (Daniel Krook)
Presentation at the OpenStack Summit in Tokyo, Japan on October 29, 2015.
http://sched.co/49vI
This talk will cover the pros and cons of four different OpenStack deployment mechanisms. Puppet, Chef, Ansible, and Salt for OpenStack all claim to make it much easier to configure and maintain hundreds of OpenStack deployment resources. With the advent of large-scale, highly available OpenStack deployments spread across multiple global regions, the choice of which deployment methodology to use has become more and more relevant.
Beyond the initial day-one deployment, when it comes to the day-two-and-beyond questions of updating and upgrading existing OpenStack deployments, it becomes all the more important to choose the right tool.
Come join the Blue Box and IBM team to discuss the pros and cons of these approaches. We look at each of these four tools in depth, explore their design and function, and determine which scores higher than others to address your particular deployment needs.
Daniel Krook - Senior Software Engineer, Cloud and Open Source Technologies, IBM
Paul Czarkowski - Cloud Engineer at Blue Box, an IBM company
CAPS: What's best for deploying and managing OpenStack? Chef vs. Ansible vs. ... (Animesh Singh)
Chef, Puppet, Ansible, and Salt are popular configuration management tools for deploying and managing OpenStack. Each tool has its own strengths and weaknesses. Chef focuses on infrastructure automation and uses a Ruby DSL. Puppet uses a custom DSL and is focused on compliance. Ansible emphasizes orchestration and uses YAML playbooks. Salt uses a Python-based interface and focuses on remote execution and data collection at scale. All four tools provide options for deploying and managing OpenStack, with varying levels of documentation and community support.
Slides from our Q3 meetup held in Montreal on September 27th 2017 at the Cloud.ca Center.
Video recording can be seen at: https://www.youtube.com/watch?v=_1btwHW39ms&list=PLSsQodeQD6LPyqrvvczcC5mkOOnPt469o
2024 Feb AI Meetup NYC: GenAI, LLMs, ML, Data - Codeless Generative AI Pipelines (Timothy Spann)
https://www.aicamp.ai/event/eventdetails/W2024022214
AI Meetup (NYC): GenAI, LLMs, ML and Data
Feb 22, 05:30 PM EST
Welcome to the monthly in-person AI meetup in New York City, in collaboration with Microsoft. Join us for deep dive tech talks on AI, GenAI, LLMs and machine learning, food/drink, networking with speakers and fellow developers
Agenda:
* 5:30pm~6:00pm: Checkin, Food/drink and networking
* 6:00pm~6:10pm: Welcome/community update
* 6:10pm~8:30pm: Tech talks
* 8:30pm: Q&A, Open discussion
Tech Talk: Searching and Reasoning Over Multimedia Data with Vector Databases and LMMs
Speaker: Zain Hasan (Weaviate)
Abstract: In this talk, Zain Hasan will discuss how we can use open-source multimodal embedding models, in conjunction with large generative multimodal models that can see, hear, read, and feel data, to perform cross-modal search (searching audio with images, videos with text, etc.) and multimodal retrieval-augmented generation (MM-RAG) at the billion-object scale with the help of open-source vector databases. He will also demonstrate, with live code demos, how performing this cross-modal retrieval in real time enables users to apply LLMs that reason over their enterprise multimodal data. The talk will revolve around scaling the use of multimodal embedding and generative models in production.
Tech Talk: Codeless Generative AI Pipelines
Speaker: Timothy Spann (Cloudera)
Abstract: Join us for an insightful talk on leveraging the power of real-time streaming tools, specifically Apache NiFi, to revolutionize GenAI data engineering. In this session, we’ll explore how the integration of Apache NiFi can automate the entire process of prompt building, making it a seamless and efficient task.
Venue:
Microsoft NYC - Times Square, 11 Times Square, New York, NY 10036
Room Name: Central Park West 6501
The best way to understand the cloud is to have one of your own to kick around, poke, break, fix, and see what it looks like when it's running. In OpenStack we've got a whole project called Devstack which is designed to quickly bootstrap the latest git versions of all the OpenStack components and create an experimentation-friendly OpenStack environment. This talk will introduce Devstack, how to get a running OpenStack with it, and how one might begin making changes and seeing them in action. We'll explore a few of the major OpenStack services and see what's going on, all with the intent to explain what OpenStack is by seeing not only the interface, but the internals at work.
Presented at LinuxCon NA 2014
The document discusses OpenStack, an open source cloud computing platform. It provides an overview of OpenStack and describes several of its core components and services, including Nova (compute), Neutron (networking), Keystone (identity), Glance (imaging), and Horizon (dashboard). It discusses the architecture and modular design of OpenStack, and how the different components work together and interact through APIs. It also provides information on various OpenStack releases and the additional services included in each release over time.
A comprehensive review of OpenStack then and now, each project's architecture, and hard data on why the race for open cloud is over. (First edition delivered April 2013 at OpenStack Summit. This version is from SPDEcon on June 10, 2013.)
This session will examine the many options the data scientist has for running Spark clusters in public and private clouds. We will discuss various environments employing AWS, Mesos, containers, docker, and BlueData EPIC technologies and the benefits and challenges of each.
Speakers:
Tom Phelan, Co-founder and Chief Architect - BlueData Inc. Tom has spent the last 25 years as a senior architect, developer, and team lead in the computer software industry in Silicon Valley. Prior to co-founding BlueData, Tom spent 10 years at VMware as a senior architect and team lead in the core R&D Storage and Availability group. Most recently, Tom led one of the key projects – vFlash, focusing on integration of server-based Flash into the vSphere core hypervisor. Prior to VMware, Tom was part of the early team at Silicon Graphics that developed XFS, one of the most successful open source file systems. Earlier in his career, he was a key member of the Stratus team that ported the Unix operating system to their highly available computing platform. Tom received his Computer Science degree from the University of California, Berkeley.
Open Source Lambda Architecture for deep learning (Patrick Nicolas)
This presentation describes the various layers and open source components that can be used to design and implement a lambda architecture enabled to support batch processing for model training and streaming for prediction
Teaching Apache Spark Clusters to Manage Their Workers Elastically: Spark Sum... (Spark Summit)
Devops engineers have applied a great deal of creativity and energy to invent tools that automate infrastructure management, in the service of deploying capable and functional applications. For data-driven applications running on Apache Spark, the details of instantiating and managing the backing Spark cluster can be a distraction from focusing on the application logic. In the spirit of devops, automating Spark cluster management tasks allows engineers to focus their attention on application code that provides value to end-users.
Using Openshift Origin as a laboratory, we implemented a platform where Apache Spark applications create their own clusters and then dynamically manage their own scale via host-platform APIs. This makes it possible to launch a fully elastic Spark application with little more than the click of a button.
We will present a live demo of turn-key deployment for elastic Apache Spark applications, and share what we’ve learned about developing Spark applications that manage their own resources dynamically with platform APIs.
The audience for this talk will be anyone looking for ways to streamline their Apache Spark cluster management, reduce the workload for Spark application deployment, or create self-scaling elastic applications. Attendees can expect to learn about leveraging APIs in the Kubernetes ecosystem that enable application deployments to manipulate their own scale elastically.
Trusted Execution Environment for Decentralized Process Mining (LucaBarbaro3)
Presentation of the paper "Trusted Execution Environment for Decentralized Process Mining" given during the CAiSE 2024 Conference in Cyprus on June 7, 2024.
Monitoring and Managing Anomaly Detection on OpenShift (Tosin Akinosho)
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application... (Alex Pruden)
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol, based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
Skybuffer AI: Advanced Conversational and Generative AI Solution on SAP Busin...Tatiana Kojar
Skybuffer AI, built on the robust SAP Business Technology Platform (SAP BTP), is the latest and most advanced version of our AI development, reaffirming our commitment to delivering top-tier AI solutions. Skybuffer AI harnesses all the innovative capabilities of the SAP BTP in the AI domain, from Conversational AI to cutting-edge Generative AI and Retrieval-Augmented Generation (RAG). It also helps SAP customers safeguard their investments into SAP Conversational AI and ensure a seamless, one-click transition to SAP Business AI.
With Skybuffer AI, various AI models can be integrated into a single communication channel such as Microsoft Teams. This integration empowers business users with insights drawn from SAP backend systems, enterprise documents, and the expansive knowledge of Generative AI. And the best part of it is that it is all managed through our intuitive no-code Action Server interface, requiring no extensive coding knowledge and making the advanced AI accessible to more users.
Taking AI to the Next Level in Manufacturing (ssuserfac0301)
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
5. Ideas and approaches to help build your organization's AI strategy.
A Comprehensive Guide to DeFi Development Services in 2024 (Intelisync)
DeFi represents a paradigm shift in the financial industry. Instead of relying on traditional, centralized institutions like banks, DeFi leverages blockchain technology to create a decentralized network of financial services. This means that financial transactions can occur directly between parties, without intermediaries, using smart contracts on platforms like Ethereum.
In 2024, we are witnessing an explosion of new DeFi projects and protocols, each pushing the boundaries of what’s possible in finance.
In summary, DeFi in 2024 is not just a trend; it’s a revolution that democratizes finance, enhances security and transparency, and fosters continuous innovation. As we proceed through this presentation, we'll explore the various components and services of DeFi in detail, shedding light on how they are transforming the financial landscape.
At Intelisync, we specialize in providing comprehensive DeFi development services tailored to meet the unique needs of our clients. From smart contract development to dApp creation and security audits, we ensure that your DeFi project is built with innovation, security, and scalability in mind. Trust Intelisync to guide you through the intricate landscape of decentralized finance and unlock the full potential of blockchain technology.
Ready to take your DeFi project to the next level? Partner with Intelisync for expert DeFi development services today!
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers (akankshawande)
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Salesforce Integration for Bonterra Impact Management (fka Social Solutions Apricot) (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Programming Foundation Models with DSPy - Meetup Slides (Zilliz)
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
2. About Me
● Background in system administration and Puppet at Portland State University
● Puppet Labs module engineer
● StackForge puppet core contributor
Colleen Murphy ⚙ freenode/crinkle ⚙ twitter/@pdx_krinkle ⚙ github/cmurphy
3. Overview
● Intro to OpenStack
● Leveraging puppet for OpenStack
● What makes puppet awesome at this
● Challenges we’re facing as puppet users
4. What is OpenStack?
OpenStack is an open source cloud computing platform.
● Public clouds - Amazon competitors
● Private clouds - Internal infrastructure, developer support, QA support
1. This talk is about OpenStack and how we're trying to solve some of the problems of installing and managing an OpenStack deployment using Puppet.
2. I was a student at Portland State University and learned Puppet and system administration as a student worker there. I was hired at Puppet Labs a few months ago, where my focus has been on contributing to the community-maintained OpenStack modules on StackForge.
3. I want to very briefly give a baseline overview of what OpenStack is and how its architecture looks, just enough so that we're all on the same page, because next I want to talk about how the OpenStack modules are used to deploy this architecture. Then I want to highlight some of the features in Puppet that make it really well suited for this, and finally talk about some of the challenges we're facing having chosen Puppet as the tool.
4. OpenStack is an open source cloud computing platform. It has two main use cases: public clouds, where a cloud company productizes it, sticks some value-adds on it, and markets it as an Amazon competitor; and private clouds, where you run a virtualization platform to support internal infrastructure, run your application on it, or make it available for use by developers, QA engineers, researchers, students, etc.
5. An OpenStack deployment starts out with two base components: a relational database server, usually MySQL or one of the forks, and RabbitMQ as the messaging service. Every OpenStack service has its own database and every service connects to the messaging queue.
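This base layer can be sketched with the Puppet Labs mysql and rabbitmq modules. A minimal, illustrative manifest (all hostnames and passwords below are placeholders, not from the talk):

```puppet
# Base layer: one MySQL server and one RabbitMQ broker.
class { 'mysql::server':
  root_password => 'CHANGEME',
}

class { 'rabbitmq':
  delete_guest_user => true,
}

# Every OpenStack service gets its own database and database user,
# e.g. for Keystone:
mysql::db { 'keystone':
  user     => 'keystone',
  password => 'CHANGEME',
  host     => 'localhost',
  grant    => ['ALL'],
}

# ...and its own messaging user:
rabbitmq_user { 'keystone':
  admin    => false,
  password => 'CHANGEME',
}
```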
6. On top of that, the first OpenStack service that needs to be set up is Keystone, the identity service. This provides a place to register and look up other services and endpoints, plus service users, authentication, and access control.
7. Then we have everything else, with varying degrees of dispensability and from which operators can pick and choose. These services may be grouped together across a set of nodes, and in some cases the service is split up into an API server and an agent, and sometimes they are replicated and load balanced in an HA setup. There isn’t a single “right” configuration.
8. We can consider a couple of these as core services that basically any OpenStack cloud will have. Nova is the compute service. Neutron provides networking, though some of that functionality can be found in Nova. Cinder is the block storage service, so any long-term block-type data will need Cinder.
9. This is where Puppet comes in: we have Puppet modules for installing and configuring almost all of the OpenStack services, and we use the Puppet Labs modules for RabbitMQ and MySQL. Within the modules, we have separate classes for managing different parts of each service, so the API services and the actual worker services are separated but can share parameters.
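As a sketch of how those classes compose on a controller node (class and parameter names follow the StackForge modules of the era; all values are placeholders):

```puppet
# Illustrative controller-node profile built from the service modules.
class { 'keystone':
  admin_token         => 'CHANGEME',
  database_connection => 'mysql://keystone:CHANGEME@127.0.0.1/keystone',
}

class { 'nova':
  database_connection => 'mysql://nova:CHANGEME@127.0.0.1/nova',
  rabbit_host         => '127.0.0.1',
}

# The API and worker pieces are separate classes that share the
# parameters set in the parent class above.
class { 'nova::api':
  admin_password => 'CHANGEME',
}
class { 'nova::scheduler': }
class { 'nova::conductor': }
```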
10. In addition, we have two special modules. openstacklib holds common code that the modules share, because they're surprisingly similar: setting up the databases, database users, and RabbitMQ users, plus common code in the types and providers, all go in openstacklib. openstack_extras covers the extra pieces around the modules that go with setting up a full OpenStack deployment; so far we have code to manage package repositories, and we're in the process of adding Pacemaker support for HA deployments.
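For instance, openstacklib provides a defined type that wraps the per-service database setup each module would otherwise repeat. Roughly (values are illustrative placeholders):

```puppet
# What a service module delegates to openstacklib for its database.
# mysql_password() comes from the puppetlabs-mysql module.
openstacklib::db::mysql { 'cinder':
  password_hash => mysql_password('CHANGEME'),
  allowed_hosts => ['localhost', '%'],
}
```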
11. The Puppet modules live on StackForge, which is the home for OpenStack-related but not OpenStack-core projects. Being on StackForge means we get to take advantage of the OpenStack CI system, review process, and source hosting. We have numerous dedicated maintainers, primarily operators trying to use these modules in production or developers trying to integrate them into their own third-party tooling. The modules are not owned by Puppet Labs or any particular company.
12. To explain the third-party tooling a bit, there are three main tools that specifically integrate the StackForge Puppet modules. Fuel is an open source deployment framework from Mirantis that uses the Puppet modules as a primary component. RDO and Packstack are from Red Hat: Packstack is a utility built on top of the Puppet modules, and RDO is a packaged set of installation tools that includes Packstack. TripleO is the official OpenStack deployment tool and now has experimental integration with the Puppet modules, with intentions to integrate with Ansible and Chef as well. Other tools, such as Juju and Foreman, are also able to integrate with Puppet.
13. Puppet is consistently voted the most popular deployment tool for OpenStack
Sources: http://superuser.openstack.org/articles/openstack-user-survey-insights-november-2014
http://www.slideshare.net/ryan-lane/openstack-atlanta-user-survey
14. What are the features of Puppet and the Puppet ecosystem that make it so well suited to OpenStack?
15. The resource abstraction layer helps us manage traditional resources like packages, files, and services in an easy, implementation-agnostic way.
16. Where it really shines, though, is that we now also have the ability to invent totally new resources, and in our case we're writing a lot of types and providers to manage OpenStack resources. We're able to manage things like Keystone resources, service registration, network subnets, and Glance images, all in Puppet. These are all managed via REST APIs that might not be on the same node, which means we're doing things like authenticating against an endpoint in the provider and writing timeouts and retries within the provider. We're able to express a lot of semi-complicated things this way. Another interesting case is the resources for managing individual parameters in config files; nova_config is an example. It's like a stripped-down version of the ini_setting resource, specific to each service. The idea is that different pieces of a service manage the same config file, so rather than building complex templates or using concat fragments, we treat each parameter as its own resource.
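A sketch of what these custom resources look like in a manifest (names and values are illustrative):

```puppet
# One parameter in nova.conf as its own resource: 'section/setting'.
nova_config { 'DEFAULT/compute_driver':
  value => 'libvirt.LibvirtDriver',
}

# OpenStack API objects managed as Puppet resources; behind the scenes
# the providers authenticate against Keystone and call the REST APIs.
keystone_tenant { 'services':
  ensure => present,
}
keystone_user { 'glance':
  ensure   => present,
  password => 'CHANGEME',
  tenant   => 'services',
}
keystone_service { 'glance':
  ensure => present,
  type   => 'image',
}
```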
17. Within a single node, Puppet is fantastic at setting up resources in the right order. This is important when we have a lot of things going on in a node, all depending on and talking to each other.
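That intra-node ordering can be expressed with Puppet's relationship syntax; a simplified sketch (resource titles are illustrative):

```puppet
# Install the package before managing any nova.conf settings, and
# restart the service whenever any of those settings change.
Package['nova-common'] -> Nova_config<||> ~> Service['nova-api']

# Keystone must be running before we can register resources in it.
Service['keystone'] -> Keystone_tenant['services']
```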
18. Because Puppet promotes this modular approach to config management, we get the ability to build cool things out of chunks. We can reuse other modules to save ourselves a bunch of work, and we make our modules logically separate and as configurable as possible so they can be used in different types of deployments. We're not creating a ./installopenstack.sh; we're developing building blocks, and using other building blocks, to compose OpenStack.
19. Part of why Puppet is so popular in OpenStack is that it's so popular in general, and the active Puppet community is a big factor in that.
20. We love Hiera in OpenStack for the same reason we love Hiera for everything else: we get to separate secrets and mundane data from the code, so that ...
21. ... you end up with cleaner manifests in the end.
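As a sketch (key names are illustrative), the data moves into Hiera:

```yaml
# hieradata/common.yaml -- secrets and site data live here, not in code
nova::rabbit_host: 'rabbit.example.com'
nova::api::admin_password: 'CHANGEME'
```

```puppet
# ...and the manifest shrinks to plain declarations; Puppet's automatic
# parameter lookup fills in the class parameters from Hiera.
include nova
include nova::api
```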
22. What challenges are we facing having chosen Puppet as our deployment technology?
23. There is currently no native inter-node orchestration in Puppet, so when we say we deploy OpenStack using Puppet, that's not really the whole picture; we need something else for this. People are using MCollective, Ansible, and other tools to accomplish it, but there isn't a standard. Puppet Labs has announced plans to handle this natively, but for people deploying OpenStack now, it's an issue.
24. High Availability support is an area of active development, and we have some initial groundwork set up in openstack_extras to use pacemaker for this. Some of the challenges here, though, are 1) the definition of HA is different for different services - a storage service has different HA needs than an API service, for example, and 2) HA is, by its nature, a multi-node issue, so we still face challenges with orchestration.
25. We use RSpec and rspec-puppet for writing unit tests for all the modules, and that helps a lot. We don't yet have functional testing. We want to use beaker-rspec for this, since it's the recommended way to test modules, but there are two problems: one is a technical problem that has largely been solved, and one is more of a social problem. The technical problem is that the modules use the openstack-infra CI to run tests, which is a foreign environment to Beaker: Beaker wants to manage spinning up the VMs itself, while the openstack-infra nodepool, Zuul, and Jenkins are already in charge of that. Spencer from the infra team has helped us a lot in hacking a way to get Beaker and nodepool to cooperate, but we still don't really know how this will work for multi-node nodesets. So we've got the groundwork for some initial functional testing, and we're in the process of developing it. The social problem is that these modules are already written, so adding tests at this point is a tedious challenge that no one has really picked up yet.
26. These projects are very fast-paced; things change all the time. Sometimes it's breaking changes, sometimes it's new features we want to take advantage of. Luckily we have a lot of skilled OpenStack operators who are always on top of what's breaking and changing in their infrastructure, so we're good at reactively fixing things, but it's an ongoing effort.
27. Even though OpenStack is a challenge, I really enjoy working on this because OpenStack is causing us to find really creative solutions to interesting problems. The challenge is fun, and working with a smart community toward different but overlapping goals is kind of awesome, and that’s why I wanted to give this talk and share where we’re at with this project.