Microservices with Node and Docker enable building and deploying applications as small, independent services that can be scaled individually. Docker provides lightweight, isolated environments for running services, while Node is well suited as a platform thanks to its asynchronous, non-blocking I/O model and the ease with which it builds scalable network services. Together, Docker and Node enable a microservices architecture with improved developer productivity, deployment flexibility, and scalability compared to traditional monolithic applications.
This presentation from the I Love APIs conference makes the case for why Node and Docker are great together for implementing microservice architecture. It also provides a quick orientation for getting started with Docker Machine, Node, and Mongo, with container linking and data volume containers.
6. Benefits of monolithic architecture
● Relatively straightforward
  ● Easier to develop, test, and debug code bundled together in a single executable process
  ● Fairly easy to reason about data flows and application behavior
● Deployment model is easy
  ● Application is deployed as a unit
● Scaling model is simple
  ● Scale up by installing on a more powerful server
  ● Scale out by placing behind a load balancer to distribute requests
7. Deployment process: historically significant
● Creating and maintaining server environments has historically been
  ● laborious and time-consuming
  ● expensive
● Consequently this imposed a practical limit on staging environments
● A typical staged pipeline would include some fixed number of servers:
  ● (local) - developer's workstation
  ● dev (or sandbox) - first stage for developers to merge and test
  ● int - integration stage for developers to test with external databases and other services
  ● qa (or test) - for functional and other types of testing
  ● uat - customer user experience testing and demo
  ● preprod (or staging) - exact replica of production environment for final verification tests
  ● production - live site
8. Things improved over time
with the introduction of
● virtual machine technology
● Vagrant
● Puppet & Chef
Nevertheless, despite positive labor, time, and infrastructure savings due to virtualization, provisioning environments still remained burdensome.
9. How is the deployment process significant?
If the process to provision environments and move an application through a pipeline is laborious and time-consuming, it makes sense to coordinate larger releases that are worth the effort.
10. Ramifications
● Inhibits continuous delivery
  ● Continuous delivery attempts to reduce the cost and risk associated with delivering incremental changes, in large part by automating as much of the deployment pipeline as possible
  ● Nevertheless, as a practical matter, for large complex applications there is too much effort, cost, and risk involved in deploying an entire application just to update a single feature
● Sub-optimal scalability
  ● Practical limit on scaling up
  ● Scaling out is a relatively expensive and inefficient way to scale
  ● Not all components are under the same load, but you can't scale out at the individual component level because the unit of scaling is the entire application
11. Ramifications (cont'd)
● Adverse impact on development
  ● Larger codebases are more difficult to maintain and extend, and as cognitive overhead increases
    ● individual team members become less effective, and contributions take greater effort
    ● quality is adversely affected
● Requires greater coordination among teams
  ● everything needs to be in phase for integration
  ● the entire team is forced to march at a cadence dictated by the application release cycle
  ● this leads to more epic releases, with all the inherent effort and risks that implies
13. Driving the microservice trend
● Proliferation of different types of connected devices, leading to an emphasis on APIs, not applications
● Technology that makes distributed architecture, as an alternative to monolithic architecture, easier to adopt
15. Consequences of the emphasis on APIs
● It becomes desirable for APIs to independently
  ● evolve
  ● deploy
  ● scale
● Potential for reducing codebase complexity
  ● through separation of concerns at a physical level
  ● codebase partitions at functional boundaries instead of layered boundaries
● Potential for reducing development friction
  ● Developers are liberated from the constraint of delivering and integrating functionality as part of a larger complex bundle
  ● API teams can move at their own cadence and deploy more frequently
16. Enter Docker
● Container technology based on a legacy that goes back to
  ● chroot
  ● FreeBSD jails
  ● Solaris zones
  ● cgroups
  ● Linux containers (LXC)
● Provides the ability to run processes in isolated operating environments
  ● A Docker host provides the ability to run processes in isolation from each other
  ● grants controlled access to system resources and dedicated network configuration
  ● Unlike VMs, containers use a shared operating system kernel (don't need a guest OS)
  ● By not virtualizing hardware, containers are far more efficient in terms of system resources
  ● Containers launch essentially as quickly as a process can be started
Docker provides lightweight, isolated micro operating environments with native process performance characteristics that make microservice architecture practical.
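A quick way to see the shared-kernel point in practice (a usage sketch, not from the slides, assuming a Linux Docker host and the small alpine image): the container reports the host's kernel release rather than that of a guest OS of its own.
$ docker run --rm alpine uname -r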
20. Enter Node
● Docker provides an efficient operating environment for isolated processes, but doesn't have anything to do with how the process is developed
● Introduced in 2009, Node.js leverages Google's high performance V8 engine for running JavaScript
● JavaScript was a natural fit for a platform-wide asynchronous callback programming model that exploited event-driven, non-blocking I/O
Node provides a platform for building lightweight, fast, and highly scalable network services ideal for serving modern web APIs.
21. Node network server
Constructing a high performance, callback-based HTTP server is as simple as the following script:
var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'application/json'});
  res.end('{ "message": "Hello World" }');
}).listen(3000);
console.log('Server listening at http://127.0.0.1:3000/');
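Assuming the script above is saved as server.js (the filename is an assumption for illustration), it can be exercised from a second shell:
$ node server.js &
Server listening at http://127.0.0.1:3000/
$ curl http://127.0.0.1:3000/
{ "message": "Hello World" }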
25. Node's advantages for microservices
● Lightweight HTTP server processes
  The Node runtime is based on Google's well-regarded open source, high performance V8 engine:
  ● Compiles JavaScript to native machine code
  ● Machine code undergoes dynamic optimization during runtime
  ● V8 is highly tuned for fast startup time, small initial memory footprint, and strong peak performance
● Highly scalable
  ● The Node platform was designed from the outset for end-to-end asynchronous I/O for high scalability
  ● No extraordinary operating requirements to support high scalability, so cost and complexity are not special concerns for deployment
● Lightweight for developers
  ● Minimal ceremony involved in creating, publishing, and consuming packages
  ● Encourages a high degree of modularization with lightweight, tightly-focused packages
  ● Easy to scaffold network services
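As an illustration of that minimal ceremony (a sketch; my-tiny-lib is a hypothetical package name, not from the deck):
$ npm init -y                       # scaffold a package.json for a new service
$ npm install my-tiny-lib --save    # consume a published package
$ npm publish                       # share your own package with other teams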
26. Recommendation for teams
● Don't complicate things
  ● The Java and .NET platforms evolved in part to address the burden of developing, configuring, and deploying a complex codebase, bundle of related artifacts, and requisite services
  ● The cornerstones of these platforms are type-safe, object-oriented programming languages with heavy emphasis on class hierarchies, domain models, and design patterns like inversion of control through dependency injection, to help developers mediate the challenges of large codebases
  ● Node microservices should be small, and easy to reason about, test, and debug
● Polyglot is OK
  ● JavaScript isn't the best language for everything
  ● Use workers for compute-intensive work, or for work best implemented with another language more suited to particular types of computing problems (see the sketch below)
  ● Use Node as the common REST API layer -- don't be polyglot for REST API code
  ● This is the glue layer that receives requests, validates API contracts, and packages results in HTTP responses
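A minimal sketch of that worker recommendation using Node's built-in child_process module (the file names and the Fibonacci workload are assumptions, not from the deck). The API process stays responsive while a forked worker absorbs the CPU-bound computation:
// main.js: keep the event loop free; delegate CPU-bound work to a worker process
var fork = require('child_process').fork;
var worker = fork('./worker.js');
worker.on('message', function (result) {
  console.log('fib(40) =', result);
  worker.kill();
});
worker.send(40);

// worker.js: compute-intensive work runs in its own process
process.on('message', function (n) {
  function fib(x) { return x < 2 ? x : fib(x - 1) + fib(x - 2); }
  process.send(fib(n));
});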
27. Takeaway
The convergence of Node and Docker container technology is very well suited for implementing microservice architecture.
● Docker makes running server processes in isolated compute environments (containers) cheap and easy. Containers are extremely efficient in terms of system resources and provide excellent performance characteristics, including fast starts.
● Node provides a high performance platform that supports high scalability with lightweight server processes. Its simple package management system makes creating, publishing, and consuming packages easy, facilitating and streamlining the process of building and deploying lightweight services.
● Organizations can achieve higher productivity and quality overall because developers focus their energy on building smaller, narrowly-focused services partitioned along functional boundaries. There is less friction and cognitive overhead with this approach, and services can evolve, be deployed, and scale independently of others.
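Putting the two together (a minimal sketch, not from the slides: it assumes the slide 21 script is saved as server.js, a node:4 base image of the era, and a hypothetical hello-api image tag):
# Dockerfile: package the slide 21 server as a container image
FROM node:4
COPY server.js /app/server.js
EXPOSE 3000
CMD ["node", "/app/server.js"]
Build and run it against the active Docker machine:
$ docker build -t hello-api .
$ docker run -d -p 3000:3000 hello-api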
30. Create a new machine (Docker host)
docker-machine create --driver driver-name machine-name
docker-machine create -d driver-name machine-name
$ docker-machine create --driver virtualbox machine1
Creating VirtualBox VM...
Creating SSH key...
Starting VirtualBox VM...
Starting VM...
To see how to connect Docker to this machine, run: docker-machine env machine1
$
31. List machines again
Can see the new Docker machine is running
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
default * virtualbox Saved
machine1 virtualbox Running tcp://192.168.99.100:2376
$
32. Tell Docker client to use the new machine
eval "$(docker-machine env machine-name)"
$ docker-machine env machine1
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/Users/tony/.docker/machine/machines/machine1"
export DOCKER_MACHINE_NAME="machine1"
# Run this command to configure your shell:
# eval "$(docker-machine env machine1)"
$
33. 33
Tell Docker client to use the new machine
33
eval "$(docker-machine env machine-name)"
$ docker-machine env machine1
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/Users/tony/.docker/machine/machines/machine1"
export DOCKER_MACHINE_NAME="machine1"
# Run this command to configure your shell:
# eval "$(docker-machine env machine1)"
$
Displays
environment
settings you
should use to
configure your
shell
34. Tell Docker client to use the new machine
eval "$(docker-machine env machine-name)"
$ docker-machine env machine1
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/Users/tony/.docker/machine/machines/machine1"
export DOCKER_MACHINE_NAME="machine1"
# Run this command to configure your shell:
# eval "$(docker-machine env machine1)"
$ eval "$(docker-machine env machine1)"
$
docker-machine env displays the environment settings you should use to configure your shell; eval applies those settings in the current shell.
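If you lose track of which machine your shell is pointing at, docker-machine can report the active one (assuming the environment variables above have been set):

$ docker-machine active
machine1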
35. Stop and start a machine
docker-machine stop|start machine-name
$ docker-machine stop machine1
$ docker-machine start machine1
Started machines may have new IP addresses. You may need to re-run the `docker-machine env` command.
$
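One way to handle that, reusing the machine from the earlier examples: confirm the machine's (possibly new) IP address and refresh the shell environment:

$ docker-machine ip machine1
192.168.99.100
$ eval "$(docker-machine env machine1)"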
42. Create a Docker machine on DigitalOcean
1. Create a DigitalOcean personal access token:
https://cloud.digitalocean.com/settings/applications
$ export DIGITALOCEAN_ACCESS_TOKEN='...'
2. Create a machine
$ docker-machine create --driver digitalocean demo
Creating SSH key...
Creating Digital Ocean droplet...
To see how to connect Docker to this machine, run: docker-machine env demo
3. Set docker client shell environment
$ eval "$(docker-machine env demo)"
4. List the machine
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
demo digitalocean Running tcp://107.170.201.137:2376
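Since a DigitalOcean droplet bills for as long as it exists, you may want to remove the machine once you are done experimenting (note: this destroys the droplet):

$ docker-machine rm demo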
47. Launch a container to run a command
docker run --rm image [cmd]
$ docker run --rm alpine echo hello
hello
alpine: create a container from this image. echo hello: run this command in the container. --rm: automatically clean up (remove the container's file system) when the container exits.
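To see why --rm matters: without it, the exited container sticks around until you remove it yourself. A quick check (the container ID below is a placeholder):

$ docker run alpine echo hello
hello
$ docker ps -a        # the exited container is still listed
$ docker rm <container-id>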
49. Launch a container to run an interactive command
docker run --rm -i -t image [cmd]
docker run --rm -it image [cmd]
$ docker run --rm -it alpine sh
/ # ls -l
total 48
drwxr-xr-x 2 root root 4096 Jun 12 19:19 bin
drwxr-xr-x 5 root root 380 Oct 13 02:18 dev
drwxr-xr-x 15 root root 4096 Oct 13 02:18 etc
drwxr-xr-x 2 root root 4096 Jun 12 19:19 home
drwxr-xr-x 6 root root 4096 Jun 12 19:19 lib
lrwxrwxrwx 1 root root 12 Jun 12 19:19 linuxrc -> /bin/busybox
drwxr-xr-x 5 root root 4096 Jun 12 19:19 media
drwxr-xr-x 2 root root 4096 Jun 12 19:19 mnt
dr-xr-xr-x 150 root root 0 Oct 13 02:18 proc
drwx------ 2 root root 4096 Oct 13 02:18 root
drwxr-xr-x 2 root root 4096 Jun 12 19:19 run
drwxr-xr-x 2 root root 4096 Jun 12 19:19 sbin
dr-xr-xr-x 13 root root 0 Oct 13 02:18 sys
drwxrwxrwt 2 root root 4096 Jun 12 19:19 tmp
drwxr-xr-x 7 root root 4096 Jun 12 19:19 usr
drwxr-xr-x 9 root root 4096 Jun 12 19:19 var
sh is an interactive command. By default, the console is attached to all 3 standard streams of the process. -i (--interactive) keeps STDIN open, and -t allocates a pseudo-TTY (expected by most command-line processes) so you can pass signals, like Ctrl-C (SIGINT). The combination is needed for interactive processes, like a shell.
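The flags can also be used separately: -i alone keeps STDIN open, which is enough to pipe a script into the container without allocating a TTY. For example:

$ echo 'echo hello from a pipe' | docker run --rm -i alpine sh
hello from a pipe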
51. Pull the latest node image
docker pull node
$ docker pull node
Using default tag: latest
latest: Pulling from library/node
843e2bded498: Pull complete
8c00acfb0175: Pull complete
8b49fe88b40b: Pull complete
20b348f4d568: Pull complete
16b189cc8ce6: Pull complete
116f2940b0c5: Pull complete
1c4c600b16f4: Pull complete
971759ab10fc: Pull complete
bdf99c85d0f4: Pull complete
a3157e9edc18: Pull complete
library/node:latest: The image you are pulling has been verified. Important: image verification is a tech preview
feature and should not be relied on to provide security.
Digest: sha256:559f91e2f6823953800360976e42fb99316044e2f9242b4f322b85a4c23f4c4f
Status: Downloaded newer image for node:latest
52. Run a container and display the Node and npm versions
docker run --rm node sh -c "node -v; npm -v"
$ docker run --rm node sh -c "node -v; npm -v"
v4.1.2
2.14.4
(Note that a bare docker run --rm node node -v; npm -v would print the host's npm version: your shell splits the command at the semicolon, so only node -v runs in the container.)
53. Run a container to evaluate a Node statement
docker run --rm node node --eval "..."
$ docker run --rm node node -e "console.log('hello')"
hello
54. Run the Node REPL in a container
docker run -it --rm node node
$ docker run --rm -it node node
> console.log('hello')
hello
undefined
>
61. Build the Docker image for the app
docker build -t image-tag .
$ docker build -t demo-app:v1 .
Sending build context to Docker daemon 5.12 kB
The path (or URL to a Git repo) defines the Docker build context. All files are sent to the Docker daemon and are available to Dockerfile commands while building the image.
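Because the entire build context is uploaded to the daemon, it helps to exclude files the image does not need. A typical .dockerignore for a Node project might look like this (contents are illustrative):

node_modules
npm-debug.log
.git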
62. Build the Docker image for the app
docker build -t image-tag .
$ docker build -t subfuzion/demo-app:v1 .
Sending build context to Docker daemon 5.12 kB
Docker images get assigned image IDs automatically, but you should also provide a tag in the form user/repo:tag if you plan on publishing the image.
63. Build the Docker image for the app
docker build -t image-tag .
$ docker build -t subfuzion/demo-app:v1 .
Sending build context to Docker daemon 5.12 kB
Step 0 : FROM node:onbuild
# Executing 3 build triggers
Trigger 0, COPY package.json /usr/src/app/
Step 0 : COPY package.json /usr/src/app/
---> Using cache
Trigger 1, RUN npm install
Step 0 : RUN npm install
---> Using cache
Trigger 2, COPY . /usr/src/app
Step 0 : COPY . /usr/src/app
---> ab7beb9c0287
Removing intermediate container 676c92cf1528
Execution of the 1st statement in the Dockerfile: FROM node:onbuild
66. Node base image
https://github.com/nodejs/docker-node
4.2/onbuild/Dockerfile
FROM node:4.0.0
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ONBUILD COPY package.json /usr/src/app/
ONBUILD RUN npm install
ONBUILD COPY . /usr/src/app
CMD [ "npm", "start" ]
RUN and WORKDIR create a directory in the image and make it the working directory for subsequent commands. When this image is used as the base for another image (a child image), the ONBUILD instructions are triggered: as separate steps (layers), they copy package.json, run npm install, and finally copy all the files (recursively) from the build context. CMD is the command to execute when a container is started (it can be overridden).
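Given the build output shown in these slides (FROM node:onbuild followed by EXPOSE 3000), the demo app's own Dockerfile reduces to something like the following two lines; the ONBUILD triggers in the base image do the rest:

FROM node:onbuild
EXPOSE 3000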
67. Build the Docker image for the app
docker build -t image-tag .
$ docker build -t subfuzion/demo-app:v1 .
Sending build context to Docker daemon 5.12 kB
Step 0 : FROM node:onbuild
# Executing 3 build triggers
Trigger 0, COPY package.json /usr/src/app/
Step 0 : COPY package.json /usr/src/app/
---> Using cache
Trigger 1, RUN npm install
Step 0 : RUN npm install
---> Using cache
Trigger 2, COPY . /usr/src/app
Step 0 : COPY . /usr/src/app
---> ab7beb9c0287
Removing intermediate container 676c92cf1528
Step 1 : EXPOSE 3000
---> Running in f16d963adcb4
---> d785b0f27ffa
Removing intermediate container f16d963adcb4
Successfully built d785b0f27ffa
Execution of the 2nd statement in the Dockerfile: EXPOSE 3000. The app listens on port 3000 inside the container; EXPOSE records that, but the port still has to be published with -p at run time before it can be reached from outside the container.
68. You can also add tags after building an image
docker tag image-id-or-tag tag
$ docker tag d785b0f27ffa subfuzion/demo-app:latest
or
$ docker tag subfuzion/demo-app:v1 subfuzion/demo-app:latest
69. List the image
docker images
$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
subfuzion/demo-app latest d785b0f27ffa 2 minutes ago 644.2 MB
subfuzion/demo-app v1 d785b0f27ffa 2 minutes ago 644.2 MB
70. Run the Node app in a container
docker run --rm -t -p host-port:container-port image
$ docker run --rm -t -p 3000:3000 subfuzion/demo-app:v1
npm info it worked if it ends with ok
npm info using npm@2.14.2
npm info using node@v4.0.0
npm info prestart simple-docker-node-demo@1.0.0
npm info start simple-docker-node-demo@1.0.0
> simple-docker-node-demo@1.0.0 start /usr/src/app
> node app.js
listening on port 3000
This creates a container from the image and runs the default command (npm start). The -p flag maps a port on the docker machine to the container's exposed port.
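For reference, an app matching this output (named simple-docker-node-demo, started via npm start, replying with JSON on port 3000) would look roughly like the sketch below; the use of Express is an assumption, not confirmed by the slides:

package.json:
{
  "name": "simple-docker-node-demo",
  "version": "1.0.0",
  "scripts": { "start": "node app.js" },
  "dependencies": { "express": "^4.0.0" }
}

app.js:
// Minimal HTTP service; Express here is an assumption
const express = require('express');
const app = express();
app.get('/', (req, res) => res.json({ message: 'hello world' }));
app.listen(3000, () => console.log('listening on port 3000'));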
74. Run the Node app in a container in detached mode
docker run -d -t -p host-port:container-port image
$ docker run -d -t -p 8000:3000 --name demo subfuzion/demo-app:v1
be76984370dd8e3aa4066af955eb54ab4116495007b7cd45743700804392555a
$ docker logs demo
npm info it worked if it ends with ok
npm info using npm@2.14.2
npm info using node@v4.0.0
npm info prestart simple-docker-node-demo@1.0.0
npm info start simple-docker-node-demo@1.0.0
> simple-docker-node-demo@1.0.0 start /usr/src/app
> node app.js
listening on port 3000
It is a good idea to name your containers, especially detached ones.
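For a detached container, docker logs shows the output captured so far; the -f flag streams it continuously, similar to tail -f:

$ docker logs -f demo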
75. Inspect, stop, and remove the container
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
be76984370dd subfuzion/demo-app:v1 "npm start" 2 minutes ago Up 2 minutes 0.0.0.0:8000->3000/tcp demo
$ docker inspect demo
. . .
$ docker stop demo
demo
$ docker rm demo
76. Accessing the running container
Since we mapped a Docker machine port to the container port, the app is reachable at the machine's IP address:
$ docker-machine ip machine1
192.168.99.100
$ curl http://192.168.99.100:8000
{"message":"hello world"}
# or
$ curl http://$(docker-machine ip machine1):8000
{"message":"hello world"}