This document provides an overview of Docker and ASP.NET Core. It discusses key Docker concepts like images, containers, Dockerfile and Docker Compose. It demonstrates how to install Docker and use common Docker commands. It also covers ASP.NET Core topics such as middleware, Razor Pages, Tag Helpers and dependency injection. The document includes an agenda with Docker and ASP.NET Core demos.
14. Dockerfile
• Text file used to build Docker images
• Contains build instructions
• Each instruction creates an intermediate image layer that can be cached to speed up future builds
• Common instructions: FROM, RUN, COPY, WORKDIR, ENV, VOLUME, EXPOSE, ENTRYPOINT, CMD
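A minimal multi-stage Dockerfile for an ASP.NET Core app might look like the sketch below; the image tags and the MyApp project name are illustrative, so adjust them to your .NET version:

```dockerfile
# Build stage: compile and publish with the SDK image
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

# Runtime stage: keep only the published output for a smaller image
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /app .
EXPOSE 80
ENTRYPOINT ["dotnet", "MyApp.dll"]
```

Because each instruction produces a cached layer, an unchanged COPY or RUN step is reused on the next build rather than re-executed.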
15. Container Network
• Networking is a core feature of the Docker platform
• Allows users to define their own networks and connect containers to them
• Create a network on a single host, or an overlay network that spans multiple hosts
docker network create [OPTIONS] my-network
docker run -d --net=my-network --name my-container mongo
docker network ls
docker network inspect my-network
16. Docker Compose
• Manages the whole application lifecycle
• Services are defined in a single configuration file (docker-compose.yml)
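A minimal sketch of such a configuration file, assuming a web app built from the current directory and a MongoDB backend (service and volume names are illustrative):

```yaml
# docker-compose.yml — illustrative two-service setup
version: '3'
services:
  web:
    build: .
    ports:
      - "8080:80"
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    depends_on:
      - db
  db:
    image: mongo
    volumes:
      - db-data:/data/db
volumes:
  db-data:
```

With this file in place, `docker-compose up` builds the web image if needed and starts both containers on a shared network.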
17. Docker Swarm
• Container orchestration platform
• Developed by Docker
• Open-source
• 3500+ commits, 150+ contributors
• Tightly integrated into the Docker ecosystem
• Use Docker API and networking
• Zero single-point-of-failure architecture
• Simple installation
• Easier learning curve
18. Kubernetes
• Container orchestration platform
• Built by Google
• Open-source
• 65000+ commits, 1500+ contributors
• More mature
• Largest market share
• More extensive and customizable
• Manual installation
• Requires serious planning to scale up and down
• Steep learning curve
19. ASP.NET Core
• Open Source
• Improved Performance
• Host on IIS, Nginx, Apache or self-host
• Ability to build and run on Windows, macOS, and Linux
• Built-in dependency injection
• Light-weight and modular HTTP request pipeline
• Environment-based configuration
• Simplified csproj file
• Ships entirely as NuGet packages
• View Component, Tag Helpers
25. Middleware
• ASP.NET Core apps are built on the middleware pipeline
• Unlike HttpHandlers and HttpModules, middleware is created and used programmatically, without a config file
• Easily added and removed in the Configure method of the Startup class
26. Middleware
Some of the built-in middleware:
• Authentication
• CORS
• Response caching
• Response compression
• Routing
• Session
• Static files
• URL rewriting
28. Razor Pages
• @page
• MVVM Framework
• Two-way Data Binding
• Razor support
• Tag Helpers
• HTML Helpers
• Handlers (OnGet, OnGetAsync, OnPost, OnPostAsync, etc.)
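A minimal page model sketch showing handlers and two-way binding (the class and property names are illustrative):

```csharp
// Razor Page model sketch for a hypothetical Contact.cshtml page.
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.RazorPages;

public class ContactModel : PageModel
{
    [BindProperty]                 // bound from the form on POST (two-way binding)
    public string Message { get; set; }

    public void OnGet()            // handles GET requests for the page
    {
        Message = "Hello";
    }

    public IActionResult OnPost()  // handles POST requests
    {
        if (!ModelState.IsValid)
            return Page();         // redisplay the page with validation errors
        return RedirectToPage("Index");
    }
}
```

In the corresponding .cshtml file, `@page` at the top marks it as a Razor Page and `@model ContactModel` wires it to this class.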
29. View Component
• More reusable components than partial views
• More powerful
• Can have parameters and business logic
• Can be loaded from external assemblies
• Used whenever you need view templates, to render a group of elements, or to associate server code with them
30. Tag Helpers
• Allow server-side code to participate in creating and rendering HTML elements
• ASP.NET Core 2.0 already provides many built-in Tag Helpers
• Used whenever you need to append behavior to a single HTML element
Docker is available for implementation across a wide range of platforms:
Desktop: macOS and Windows 10.
Server: Various Linux distributions and Windows Server 2016.
Cloud: Amazon Web Services, Google Cloud Platform, Microsoft Azure, IBM Cloud, and more.
Docker Daemon: A persistent background process that manages Docker images, containers, networks, and storage volumes. The Docker daemon constantly listens for Docker API requests and processes them.
Docker Engine REST API: An API used by applications to interact with the Docker daemon; it can be accessed by an HTTP client.
Docker CLI: A command line interface client for interacting with the Docker daemon. It greatly simplifies how you manage container instances and is one of the key reasons why developers love using Docker.
docker run -ti busybox (interactive)
Kill all running containers: docker kill $(docker ps -q)
Delete all stopped containers: docker rm $(docker ps -a -q)
Delete all images: docker rmi $(docker images -q)
Running Container & Inspect
Volume: Special type of directory in a container typically referred to as a “data volume”
Can be shared and reused among containers
Updates to an image won’t affect a data volume
docker run -p 8080:3000 -v $(pwd):/app microsoft/aspnetcore-build
docker inspect containerId
FROM creates a layer from the ubuntu:15.04 Docker image.
COPY adds files from your Docker client’s current directory.
RUN builds your application with make.
CMD specifies what command to run within the container.
CMD ["/bin/bash", "-c", "dotnet restore && dotnet run"]
EXPOSE ContainerPort (documents the port the container listens on)
ENTRYPOINT dotnet myapp.dll
All Docker installations represent the docker0 network as bridge; Docker connects containers to bridge by default.
Run ifconfig on the Linux host to view the bridge network
docker-compose.yml keys: build, environment, image, networks, volumes, ports
docker-compose build
docker-compose up (create and start containers)
docker-compose down (stop and remove containers)
docker-compose logs
docker-compose ps
docker-compose start
docker-compose stop
docker-compose rm
Kubernetes grew out of Google's internal cluster management systems, Borg and its successor Omega.
It is more mature due to having been around longer (v1.0 came out in July 2015).
The Program class is the main entry point for ASP.NET Core 2.0 applications; in this regard, ASP.NET Core 2.0 applications are very similar to standard .NET Framework console applications.
The new Microsoft.AspNetCore.All metapackage contains all ASP.NET Core 2.0 features in a single reference (you can still reference the individual packages manually instead). The runtime store is an important new component shipped with ASP.NET Core 2.0: it contains packages precompiled to native machine code and is key to improved performance. All applications using the Microsoft.AspNetCore.All package benefit from it.
Docker container images containing ASP.NET Core 2.0 applications are much smaller than images with classic ASP.NET applications, meaning they are faster to deploy and to start up.
Previous versions of ASP.NET had a very close relationship with Internet Information Services (IIS). Wanting to change this, Microsoft defined the Open Web Interface for .NET (OWIN) specification, and .NET Core borrowed heavily from it. There are no more Global.asax, web.config, or machine.config configuration files, and no modules or handlers.
Bootstrap code declares a class that contains a convention-defined method (a class named Startup is used if none is declared explicitly). This conventional method, which should be called Configure, receives as its sole parameter an IApplicationBuilder instance. You then add middleware to the IApplicationBuilder; this middleware is what will handle your web requests.
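The convention above can be sketched as follows (the terminal middleware and response text are illustrative):

```csharp
// Convention-based Startup class: the host discovers it by name
// and calls Configure to build the middleware pipeline.
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        app.UseStaticFiles();    // serve files from wwwroot

        app.Run(async context =>  // terminal middleware: ends the pipeline
        {
            await context.Response.WriteAsync("Hello from the pipeline");
        });
    }
}
```

Each Use* call adds one middleware component; requests flow through them in registration order.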
Microsoft estimates that about 70% of the libraries on NuGet should just work with .NET Core now; the WCF and WPF APIs are notable gaps.
Additional Topics
HostingEnvironment => launchSettings.json, Command-line arguments, encrypted user store, custom provider
ASPNETCORE_ENVIRONMENT => Development, Staging, Production
An HttpModule runs for each request before arriving at the handler that generates the response, and/or after it generates the response
A Handler handles the request and generates the response for a given file extension
_ViewImports.cshtml @using AspNetCoreDemo
@addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers
_ViewStart.cshtml @{ Layout = "~/Views/Shared/_Layout.cshtml"; }
Static files are not served by default; enable access to them with app.UseStaticFiles();
Note that by default all static files served by this middleware are public and anyone can access them. If you need to protect some of your files, you need to either store them outside the wwwroot folder or you need to use the FileResult controller action, which supports the authorization middleware.
Activate Session: app.UseSession();
Instance: the same instance is provided every time; we are responsible for its creation.
Transient: a new instance is created each time the service is requested.
Scoped: the instance is created once per HTTP request.
Singleton: the instance is created once per application lifetime.
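The four lifetimes map onto service registrations in ConfigureServices; a sketch, where IMyService/MyService are illustrative names (each registration is shown for comparison — in practice you would pick one lifetime per service):

```csharp
using Microsoft.Extensions.DependencyInjection;

public interface IMyService { }
public class MyService : IMyService { }

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Instance: we create the object ourselves; the same one is returned every time
        services.AddSingleton<IMyService>(new MyService());

        // Transient: a new instance each time the service is resolved
        services.AddTransient<IMyService, MyService>();

        // Scoped: one instance per HTTP request
        services.AddScoped<IMyService, MyService>();

        // Singleton: one instance for the whole application lifetime
        services.AddSingleton<IMyService, MyService>();
    }
}
```

Registered services are then injected via constructor parameters; the container chooses the last matching registration when a single IMyService is requested.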
Kestrel is the default, cross-platform web server. It offers acceptable performance, but lacks many features expected in real-life deployments:
No buffering
No support for Windows authentication (as time passes, this is less of a problem)
No WebSockets
No HTTP/2
No direct file transmission
No strong security protection