If you have ever heard of observability, you have likely also heard of monitoring, telemetry, and tracing, and most probably heard them used interchangeably. Our customers are no exception, and they often come to us with the wrong idea of which technologies to use to tackle each of them. In this session, we will dive into this soup of words, shed light on each of these terms, and explore the breadth of AWS services that best suit each use case.
Text Analysis: the process of converting unstructured text into a structured format that is optimized for search.
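To make this concrete, here is a minimal sketch of what an analyzer does, mimicking the core steps of OpenSearch's standard analyzer (tokenization plus lowercasing) in plain Python, along with the request body you could send to the server-side `_analyze` API. The example text is illustrative.

```python
import re

def simple_analyze(text):
    """Mimic the core steps of the standard analyzer:
    split the text into word tokens, then lowercase each one."""
    return [token.lower() for token in re.findall(r"\w+", text)]

tokens = simple_analyze("The QUICK Brown Fox!")
print(tokens)  # ['the', 'quick', 'brown', 'fox']

# The real analysis chain runs server-side; this is the request body
# you would POST to OpenSearch's _analyze endpoint to see its output.
analyze_request = {"analyzer": "standard", "text": "The QUICK Brown Fox!"}
```

The structured tokens, not the raw string, are what end up in the inverted index and what queries are matched against.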
https://youtube.com/shorts/n5t0B3N71vs?feature=share
Query DSL (Domain Specific Language)
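A brief sketch of what Query DSL requests look like, built as Python dicts. The index and field names (`articles`, `title`, `status`) are hypothetical; the `match` and `term` query shapes follow the OpenSearch Query DSL.

```python
# A match query analyzes the search text and matches documents whose
# analyzed tokens overlap with it (full-text search).
match_query = {
    "query": {
        "match": {"title": {"query": "quick brown fox"}}
    }
}

# A term query matches an exact, un-analyzed value (filtering).
term_query = {
    "query": {
        "term": {"status": "published"}
    }
}

# With the opensearch-py client, such a body would be passed roughly as:
#   client.search(index="articles", body=match_query)
```

The distinction matters: `match` goes through the same analysis chain as indexing, while `term` does not, which is why `term` queries against analyzed text fields often return nothing.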
Purpose of the slide: Give a brief background on OpenSearch, the open-source engine that powers Amazon OpenSearch Service.
On January 21, 2021, Elastic NV announced that it would change its software licensing strategy and no longer release new versions of Elasticsearch and Kibana under the permissive Apache License, Version 2.0 (ALv2). Instead, new versions of the software would be offered under the Elastic License, with source code available under the Elastic License or SSPL. These are not open source licenses and do not offer users the freedoms of open source. To ensure that the open source community and our customers continue to have a secure, high-quality, fully open source search and analytics suite, AWS introduced OpenSearch: a community-driven, ALv2-licensed fork of open source Elasticsearch and Kibana.
The ALv2 license gives the open source community and our customers the freedom to use, modify, extend, embed, monetize, resell, and offer OpenSearch as part of their products and services. The announcement of OpenSearch has garnered positive support from the community. Numerous organizations, such as SAP, Capital One, Dow Jones, Logz.io, and Red Hat, as well as individual contributors, have expressed interest in joining the project and helping develop OpenSearch.
Amazon OpenSearch is a distributed system.
A cluster has a number of different node types: data nodes, which hold your data, index it, and respond to your queries; and master nodes, which orchestrate the cluster and keep it functioning as a whole.
UltraWarm nodes provide high-density storage backed by S3, letting you store long-tail data at a much reduced cost.
We have a number of security features: IAM to control access to the cluster, and the Open Distro security plugin to provide fine-grained access control within your OpenSearch cluster.
We also integrate with other services. On the input side: Kinesis Data Firehose can push your data into OpenSearch for log workloads; DMS can deliver database data into OpenSearch; and CloudWatch Logs supports batched delivery to OpenSearch via Lambda.
On the output side, metrics go to CloudWatch and audit data goes to CloudTrail.
We call this collection of resources a domain.
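A hypothetical domain configuration sketching the node types above (data nodes, dedicated master nodes, and UltraWarm nodes). The domain name, engine version string, instance types, and counts are placeholders for illustration, not recommendations.

```python
# Illustrative domain configuration; all values are placeholders.
domain_config = {
    "DomainName": "my-search-domain",           # hypothetical name
    "EngineVersion": "OpenSearch_2.11",         # assumed version string
    "ClusterConfig": {
        "InstanceType": "r6g.large.search",     # data nodes: hold and index data
        "InstanceCount": 3,
        "DedicatedMasterEnabled": True,         # master nodes: orchestrate the cluster
        "DedicatedMasterType": "m6g.large.search",
        "DedicatedMasterCount": 3,
        "WarmEnabled": True,                    # UltraWarm: S3-backed, high density
        "WarmType": "ultrawarm1.medium.search",
        "WarmCount": 2,
    },
}

# With boto3 this configuration would be submitted roughly as:
#   import boto3
#   boto3.client("opensearch").create_domain(**domain_config)
```

Three dedicated masters is the usual quorum-friendly choice; an even count risks split votes during elections.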
Amazon SageMaker is the most complete end-to-end ML service, helping our customers through improved agility, productivity, and cost-effectiveness.
We built Amazon SageMaker from the ground up to provide every developer and data scientist with the ability to build, train, and deploy ML models quickly and at lower cost by providing the tools required for every step of the ML development lifecycle in one integrated, fully managed service. In fact, we have launched 50+ capabilities in the past year alone, all aimed at making this process easier for our customers.
And last year we launched Amazon SageMaker Studio to bring this all together in a single pane of glass so that you get access to all your tools in one place.
1. Dataset – images
2. A convolutional neural network – to extract a vector (embedding) from each image
3. The model must be accessible – ideally through an endpoint with an API
4. We need to store all of the dataset's vectors in OpenSearch
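The steps above can be sketched as follows. The endpoint name, index field, and vector dimension are assumptions for illustration; the `knn_vector` mapping and `knn` query shapes follow the OpenSearch k-NN plugin.

```python
DIMENSION = 512  # assumed output size of the CNN embedding model

# Step 4: index mapping that stores each image's vector in a knn_vector
# field, with the k-NN plugin enabled for the index.
index_body = {
    "settings": {"index": {"knn": True}},
    "mappings": {
        "properties": {
            "image_vector": {"type": "knn_vector", "dimension": DIMENSION}
        }
    },
}

# Step 3: obtaining a vector from the model endpoint would look roughly like:
#   import boto3, json
#   runtime = boto3.client("sagemaker-runtime")
#   resp = runtime.invoke_endpoint(
#       EndpointName="image-embedding-endpoint",  # hypothetical name
#       ContentType="application/x-image",
#       Body=image_bytes,
#   )
#   vector = json.loads(resp["Body"].read())

def knn_query(vector, k=5):
    """Build a k-NN query returning the k stored vectors nearest to `vector`."""
    return {
        "size": k,
        "query": {"knn": {"image_vector": {"vector": vector, "k": k}}},
    }

query = knn_query([0.1] * DIMENSION)
```

At query time you embed the query image with the same endpoint, then run `knn_query` against the index, so both sides of the comparison live in the same vector space.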