The document describes deploying Cosmos DB resources on Azure using Terraform. It outlines prerequisites, environment details, and the configuration files and process used to create a resource group, a Cosmos DB account, a database, and a collection. The main.tf file defines these resources, variables.tf contains the configurable values, and output.tf displays information after deployment. Running the terraform init and terraform plan commands prepares the resources for deployment.
Deploying Cosmos DB using Terraform
Introduction
In this exercise, Azure resources will be deployed using Terraform; specifically, the following elements will be created:
• Resource group.
• Cosmos DB account.
• Database.
• Collection.
Conceptual diagram of the task to be performed.
Prerequisites.
• An active Azure subscription.
o If you do not have one, create it at: https://azure.microsoft.com/es-mx/free/
• Prior knowledge of Azure, Cosmos DB, Git, and Terraform.
Environment.
• Windows 10 Home – Core i5 – 12 GB RAM.
• Visual Studio Code v1.51.1.
• Chocolatey v0.10.15.
• Azure CLI v2.15.1.
• Terraform v0.13.5.
Some concepts.
Azure CLI is the Azure command-line interface used to create and manage Azure resources; it is available for Linux, macOS, and Windows.
Chocolatey is a package manager for Windows, similar to apt-get.
Cosmos DB is a distributed, horizontally scalable, multi-model database service. It natively supports column (Cassandra), document (SQL and MongoDB), graph (Gremlin), and key-value (Azure Table Storage) data models.
Terraform is open-source software for deploying infrastructure as code (IaC). It was created by HashiCorp, is written in Go, and supports a number of cloud providers, including Azure.
Terraform Cosmos DB | Moisés Elías Araya
Cosmos DB features.
Procedure
Open a PowerShell console and run the az login command. This opens the browser, where the account credentials must be entered. Once signed in, note the values of "homeTenantId" and "id".
PS C:\WINDOWS\system32> az login
The default web browser has been opened at https://login.microsoftonline.com/common/oauth2/authorize.
Please continue the login in the web browser. If no web browser is available or if the web browser
fails to open, use device code flow with `az login --use-device-code`.
You have logged in. Now let us find all the subscriptions to which you have access...
[
{
"cloudName": "AzureCloud",
"homeTenantId": "f8bad4ef-a9e1-4186-bcf2-2351494523da",
"id": "29831166-1ec2-4121-b6ca-7d0b5190218",
"isDefault": true,
"managedByTenants": [],
"name": "Azure subscription 1",
"state": "Enabled",
"tenantId": "f8bad4ef-a9e1-4186-bcf2-2351494523da",
"user": {
"name": "eliasarayam@outlook.cl",
"type": "user"
}
}
]
These values can also be found under Subscriptions and Azure Active Directory.
Configuration files.
The configuration is available in the following GitHub repository: https://github.com/EliasGH/terraformcosmosdb, and it consists of 3 files: main.tf, variables.tf, and output.tf.
Variables.tf
This file contains the variables for the resource group, location, and Cosmos DB account name; all of these values can be changed as preferred.
The subscription_id and tenant_id variables are also present; this is where the "id" and "homeTenantId" values noted in the previous step must be copied.
variable "resource_group_name" {
  default = "cosmosdb-rg"
}

variable "resource_group_location" {
  default = "eastus"
}

variable "subscription_id" {
  default = "29831166-1ec2-4121-b6ca-7d0b5190218c"
}

variable "tenant_id" {
  default = "f8bad4ef-a9e1-4186-bcf2-2351494523da"
}

variable "cosmos_db_account_name" {
  default = "cosmostf"
}

variable "failover_location" {
  default = "eastus2"
}
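Instead of editing the defaults in place, the same values can be supplied through a separate variables file. A minimal sketch (this terraform.tfvars file is not part of the repository; the values shown are the same placeholders used above):

```hcl
# terraform.tfvars -- values defined here override the defaults in variables.tf
resource_group_name     = "cosmosdb-rg"
resource_group_location = "eastus"
subscription_id         = "29831166-1ec2-4121-b6ca-7d0b5190218c"
tenant_id               = "f8bad4ef-a9e1-4186-bcf2-2351494523da"
cosmos_db_account_name  = "cosmostf"
failover_location       = "eastus2"
```

Terraform loads terraform.tfvars automatically, which keeps subscription-specific values out of the versioned variables.tf.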
Main.tf
This file contains the main configuration: it defines the provider, the resource group, the Cosmos DB account, the database, and the collection.
provider "azurerm" {
  version         = "~> 1.34.0"
  subscription_id = "${var.subscription_id}"
  tenant_id       = "${var.tenant_id}"
}

resource "azurerm_resource_group" "rg" {
  name     = "${var.resource_group_name}"
  location = "${var.resource_group_location}"
}

resource "azurerm_cosmosdb_account" "acc" {
  name                      = "${var.cosmos_db_account_name}"
  location                  = "${azurerm_resource_group.rg.location}"
  resource_group_name       = "${azurerm_resource_group.rg.name}"
  offer_type                = "Standard"
  kind                      = "GlobalDocumentDB"
  enable_automatic_failover = true

  consistency_policy {
    consistency_level = "Session"
  }

  geo_location {
    location          = "${var.failover_location}"
    failover_priority = 1
  }

  geo_location {
    location          = "${var.resource_group_location}"
    failover_priority = 0
  }
}

resource "azurerm_cosmosdb_sql_database" "db" {
  name                = "servicios"
  resource_group_name = "${azurerm_cosmosdb_account.acc.resource_group_name}"
  account_name        = "${azurerm_cosmosdb_account.acc.name}"
}

resource "azurerm_cosmosdb_sql_container" "coll" {
  name                = "viajes"
  resource_group_name = "${azurerm_cosmosdb_account.acc.resource_group_name}"
  account_name        = "${azurerm_cosmosdb_account.acc.name}"
  database_name       = "${azurerm_cosmosdb_sql_database.db.name}"
  partition_key_path  = "/viajesId"
}
First, the Azure provider is defined, along with the IDs that tell Terraform in which subscription the Cosmos DB resources will be deployed.
Next, a resource group is created, which is needed to host all the resources to be created; a name and a location are defined, both referenced from the variables file.
The Cosmos DB account is then configured through the azurerm_cosmosdb_account resource: the name, location, resource group, offer type, account kind, and consistency level are defined, and geo-replication is enabled along with the location priorities to use in case of a failure.
A database named servicios is then created inside that account, using the same resource group created earlier; the resource used is azurerm_cosmosdb_sql_database.
Finally, a collection named viajes is created with the partition key "/viajesId"; these resources are created under the group, account, and database above.
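As an illustration (this document is not part of the repository, and the field names besides id are hypothetical), an item stored in the viajes container would carry a property matching the partition key path /viajesId:

```json
{
  "id": "a1b2c3",
  "viajesId": "viaje-001",
  "origen": "Santiago",
  "destino": "Valparaíso"
}
```

Cosmos DB uses the value of viajesId to decide which logical partition each document belongs to, so queries that filter on it stay within a single partition.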
Output.tf
The output file is used to display useful information when the resource deployment process finishes; in this case it will show the database name, the connection string, the account ID, and the endpoint.
output "databases" {
  value = azurerm_cosmosdb_sql_database.db.name
}

output "endpoint" {
  description = "The endpoint used to connect to the CosmosDB account."
  value       = azurerm_cosmosdb_account.acc.endpoint
}

output "id" {
  description = "The ID of the CosmosDB Account."
  value       = azurerm_cosmosdb_account.acc.id
}

output "cosmos_db_connection_string" {
  value = "${azurerm_cosmosdb_account.acc.connection_strings}"
}
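Once the configuration has been applied, these declared outputs can be read back from the state at any time with the terraform output command. A quick sketch (the printed values below are examples, not real ones):

```shell
# Print a single output value
terraform output endpoint

# Print all outputs, including sensitive ones, as JSON
terraform output -json
```

This is useful for feeding the endpoint or connection strings into scripts after deployment without opening the Azure portal.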
Everything is ready to start the deployment.
Initialize the process.
The first required step is to run the terraform init command.
This command creates a new environment and downloads and installs the binaries needed to use the selected provider.
PS C:\RepoGIT\CosmosDB> terraform init
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/azurerm versions matching "~> 1.34.0"...
- Installing hashicorp/azurerm v1.34.0...
- Installed hashicorp/azurerm v1.34.0 (signed by HashiCorp)
Warning: Interpolation-only expressions are deprecated
on main.tf line 3, in provider "azurerm":
3: subscription_id = "${var.subscription_id}"
Terraform 0.11 and earlier required all non-constant expressions to be
provided via interpolation syntax, but this pattern is now deprecated. To
silence this warning, remove the "${ sequence from the start and the }"
sequence from the end of this expression, leaving just the inner expression.
Template interpolation syntax is still used to construct strings from
expressions when the template includes multiple interpolation sequences or a
mixture of literal strings and interpolations. This deprecation applies only
to templates that consist entirely of a single interpolation sequence.
(and 13 more similar warnings elsewhere)
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Output of the terraform init command.
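The interpolation warnings above can be silenced under Terraform 0.12 and later by dropping the "${...}" wrappers around lone variable references, as the warning itself suggests. For example, the provider block could be rewritten as (same values, newer syntax):

```hcl
provider "azurerm" {
  version         = "~> 1.34.0"
  subscription_id = var.subscription_id
  tenant_id       = var.tenant_id
}
```

The behavior is identical; only the notation changes, and the same simplification applies to the other 13 flagged expressions.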
The second step is to create an execution plan. Here Terraform is told which actions it will run, and in what order, to deploy the resources; the syntax is also validated and, as a good practice, the plan can be saved to a file to be executed in the next step.
Run the terraform plan command with the --out plan.out option.
PS C:\RepoGIT\CosmosDB> terraform plan --out plan.out
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# azurerm_cosmosdb_account.acc will be created
+ resource "azurerm_cosmosdb_account" "acc" {
+ connection_strings = (sensitive value)
+ enable_automatic_failover = true
+ enable_multiple_write_locations = false
+ endpoint = (known after apply)
+ id = (known after apply)
+ is_virtual_network_filter_enabled = false
+ kind = "GlobalDocumentDB"
+ location = "eastus"
+ name = "cosmostf"
+ offer_type = "Standard"
+ primary_master_key = (sensitive value)
+ primary_readonly_master_key = (sensitive value)
+ read_endpoints = (known after apply)
+ resource_group_name = "cosmosdb-rg"
+ secondary_master_key = (sensitive value)
+ secondary_readonly_master_key = (sensitive value)
+ tags = (known after apply)
+ write_endpoints = (known after apply)
+ consistency_policy {
+ consistency_level = "Session"
+ max_interval_in_seconds = 5
+ max_staleness_prefix = 100
}
+ geo_location {
+ failover_priority = 0
+ id = (known after apply)
+ location = "eastus"
}
+ geo_location {
+ failover_priority = 1
+ id = (known after apply)
+ location = "eastus2"
}
}
# azurerm_cosmosdb_sql_container.coll will be created
+ resource "azurerm_cosmosdb_sql_container" "coll" {
+ account_name = "cosmostf"
+ database_name = "servicios"
+ id = (known after apply)
+ name = "viajes"
+ partition_key_path = "/viajesId"
+ resource_group_name = "cosmosdb-rg"
}
# azurerm_cosmosdb_sql_database.db will be created
+ resource "azurerm_cosmosdb_sql_database" "db" {
+ account_name = "cosmostf"
+ id = (known after apply)
+ name = "servicios"
+ resource_group_name = "cosmosdb-rg"
}
# azurerm_resource_group.rg will be created
+ resource "azurerm_resource_group" "rg" {
+ id = (known after apply)
+ location = "eastus"
+ name = "cosmosdb-rg"
+ tags = (known after apply)
}
Plan: 4 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ cosmosdb_connectionstrings = (sensitive value)
+ databases = "servicios"
+ endpoint = (known after apply)
+ id = (known after apply)
Warning: Interpolation-only expressions are deprecated
on main.tf line 3, in provider "azurerm":
3: subscription_id = "${var.subscription_id}"
Terraform 0.11 and earlier required all non-constant expressions to be
provided via interpolation syntax, but this pattern is now deprecated. To
silence this warning, remove the "${ sequence from the start and the }"
sequence from the end of this expression, leaving just the inner expression.
Template interpolation syntax is still used to construct strings from
expressions when the template includes multiple interpolation sequences or a
mixture of literal strings and interpolations. This deprecation applies only
to templates that consist entirely of a single interpolation sequence.
(and 13 more similar warnings elsewhere)
------------------------------------------------------------------------
This plan was saved to: plan.out
To perform exactly these actions, run the following command to apply:
terraform apply "plan.out"
The plan output indicates that four resources will be created: the resource group, the account, the database, and the collection.
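The interpolation warnings in the plan output can be silenced by switching to the expression syntax introduced in Terraform 0.12. A minimal before/after sketch for the provider block named in the warning (the variable declaration is assumed to live in variables.tf):

```hcl
# Terraform 0.11-style interpolation (triggers the deprecation warning):
provider "azurerm" {
  subscription_id = "${var.subscription_id}"
}

# Terraform 0.12+ style (same behavior, warning silenced):
provider "azurerm" {
  subscription_id = var.subscription_id
}
```

The same change applies to the 13 other warnings: wherever a value is a single `"${...}"` expression, drop the quotes and the `${ }` wrapper.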
Now run the terraform apply command; it applies the changes shown in the previous step.
PS C:\RepoGIT\CosmosDB> terraform apply plan.out
azurerm_resource_group.rg: Creating...
azurerm_cosmosdb_account.acc: Creating...
azurerm_cosmosdb_account.acc: Still creating... [10s elapsed]
azurerm_cosmosdb_account.acc: Still creating... [40s elapsed]
azurerm_cosmosdb_sql_database.db: Creating...
azurerm_cosmosdb_sql_database.db: Still creating... [10s elapsed]
azurerm_cosmosdb_sql_database.db: Still creating... [20s elapsed]
azurerm_cosmosdb_sql_database.db: Still creating... [1m0s elapsed]
azurerm_cosmosdb_sql_container.coll: Creating...
azurerm_cosmosdb_sql_container.coll: Still creating... [10s elapsed]
azurerm_cosmosdb_sql_container.coll: Still creating... [50s elapsed]
azurerm_cosmosdb_sql_container.coll: Still creating... [1m0s elapsed]
azurerm_cosmosdb_sql_container.coll: Creation complete
Warning: Interpolation-only expressions are deprecated
(and 13 more similar warnings; full warning text shown in the plan output above)
Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
Outputs:
cosmos_db_connection_string = [
"AccountEndpoint=https://cosmostf.documents.azure.com:443/;AccountKey=aYNg5E106z6HigfBSsFuJLYmouuqBVgIlfMyYknSsrk2vSOQVt5UWDIXLcsaSuaVqy9LEQSL6H8TdawaQ85Xgg==;",
"AccountEndpoint=https://cosmostf.documents.azure.com:443/;AccountKey=xyDd1FxRhjBYKxP52LzO1zjI7NMcRBuWeYZyzA0c0DCeaUJ8XTTBIaB3fX0C2UEURwfLZOmJQaKiwQFxYYINTg==;",
"AccountEndpoint=https://cosmostf.documents.azure.com:443/;AccountKey=rkFH9Vs8053GeE2iqDYysNuF2iQP5gNW2srtXWjMgEGZnGgVAKE7UvtakQ5e4RXmFr8rG3kke1N3BDFSMOkJHg==;",
"AccountEndpoint=https://cosmostf.documents.azure.com:443/;AccountKey=7ReLmKejxvjMK5izqvobz1FoCYlZjnxyFAfOMyIxz762iUn8S9WVkgC2OGeVgg7RZZTQmh3xzjY79khzQrQfCg==;",
]
databases = servicios
endpoint = https://cosmostf.documents.azure.com:443/
id = /subscriptions/29831166-1ec2-4121-b6ca-7d0b5190218c/resourceGroups/cosmosdb-rg/providers/Microsoft.DocumentDB/databaseAccounts/cosmostf
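The outputs above are declared in output.tf. A plausible sketch of such a file, reconstructed from the output names and the resource addresses in the plan (not the author's actual output.tf; attribute names are from the azurerm provider):

```hcl
# Hypothetical output.tf consistent with the outputs shown above
output "cosmosdb_connectionstrings" {
  value     = azurerm_cosmosdb_account.acc.connection_strings
  sensitive = true # keys are printed only on request, not in plan output
}

output "databases" {
  value = azurerm_cosmosdb_sql_database.db.name
}

output "endpoint" {
  value = azurerm_cosmosdb_account.acc.endpoint
}

output "id" {
  value = azurerm_cosmosdb_account.acc.id
}
```

Note that the connection strings contain the account keys, which is why Terraform marks them as a sensitive value during plan.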
The output shows the values defined in the output.tf file.
The process finishes after 15-20 minutes with the result shown above.
Review the results.
Connect to the web console and verify that the resources were created.
It is also possible to access the Cosmos DB Data Explorer through the following URL:
https://cosmos.azure.com/
Clean up/delete resources.
To delete the created resources, run the terraform destroy command (this task takes a few minutes to complete).
PS C:\RepoGIT\CosmosDB> terraform destroy -auto-approve
...output omitted...
azurerm_resource_group.rg: Destruction complete after 51s
Destroy complete! Resources: 4 destroyed.
# The -auto-approve flag destroys the resources without asking for confirmation.
References and complementary material.
• Cosmos DB documentation.
o https://docs.microsoft.com/en-us/azure/cosmos-db/introduction
o https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/ready/azure-best-practices/naming-and-tagging
o https://docs.microsoft.com/en-us/azure/developer/terraform/deploy-azure-cosmos-db-to-azure-container-instances
• Terraform documentation.
o https://learn.hashicorp.com/tutorials/terraform/install-cli
o https://learn.hashicorp.com/terraform
o https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/cosmosdb_account#connection_strings
• Online diagramming software.
o https://lucid.app/documents#/dashboard
• Chocolatey installation.
o https://chocolatey.org/install
Appendices.
a- Install Azure CLI.
Start PowerShell as administrator and run the command:
Invoke-WebRequest -Uri https://aka.ms/installazurecliwindows -OutFile .\AzureCLI.msi;
Start-Process msiexec.exe -Wait -ArgumentList '/I AzureCLI.msi /quiet'; rm .\AzureCLI.msi
Check the installed version.
PS C:\WINDOWS\system32> az --version
azure-cli 2.15.1
Close and reopen the PowerShell console.
b- Install Terraform.
PS C:\WINDOWS\system32> choco install terraform
Chocolatey v0.10.15
Installing the following packages:
terraform
By installing you accept licenses for the packages.
Progress: Downloading terraform 0.13.5... 100%
terraform v0.13.5 [Approved]
terraform package files install completed. Performing other installation steps.
The package terraform wants to run 'chocolateyInstall.ps1'.
Note: If you don't run this script, the installation will fail.
Note: To confirm automatically next time, use '-y' or consider:
choco feature enable -n allowGlobalConfirmation
Do you want to run the script?([Y]es/[A]ll - yes to all/[N]o/[P]rint): Y
Removing old terraform plugins
Downloading terraform 64 bit
from 'https://releases.hashicorp.com/terraform/0.13.5/terraform_0.13.5_windows_amd64.zip'
Progress: 100% - Completed download of
C:\Users\User1\AppData\Local\Temp\chocolatey\terraform\0.13.5\terraform_0.13.5_windows_amd64.zip (33.23 MB).
Download of terraform_0.13.5_windows_amd64.zip (33.23 MB) completed.
Hashes match.
Extracting C:\Users\User1\AppData\Local\Temp\chocolatey\terraform\0.13.5\terraform_0.13.5_windows_amd64.zip to
C:\ProgramData\chocolatey\lib\terraform\tools...
C:\ProgramData\chocolatey\lib\terraform\tools
ShimGen has successfully created a shim for terraform.exe
The install of terraform was successful.
Software installed to 'C:\ProgramData\chocolatey\lib\terraform\tools'
Chocolatey installed 1/1 packages.
See the log for details (C:\ProgramData\chocolatey\logs\chocolatey.log).
Validate the installation by checking the installed version.
PS C:\WINDOWS\system32> terraform --version
Terraform v0.13.5
Your version of Terraform is out of date! The latest version
is 0.14.0. You can update by downloading from https://www.terraform.io/downloads.html
PS C:\WINDOWS\system32>
c- Enable Terraform language support in Visual Studio Code.
A VS Code extension adds support for the Terraform configuration language; some of its features are syntax highlighting, basic syntax validation, functions, etc.
Browse the available options in the View - Extensions menu, type Terraform, select the extension, install it, and validate.
Final result.