Sep 2012 HUG: Elastic, Multi-tenant, Highly Available Hadoop on Demand - Yahoo Developer Network
Serengeti is an open-source project, initiated by VMware, to enable the rapid deployment of Hadoop clusters in virtual environments. While Hadoop clusters are typically run on physical machines, Serengeti aims to bridge Hadoop and virtualization, and bring the classic benefits of virtualization to the Hadoop user. Leveraging virtual machines, Serengeti-deployed clusters can be simply operated, configured for HA protection, and made elastic through the decoupling of Hadoop compute and data layers. In this talk, we explore each of these aspects of running Hadoop on a virtual platform.
Presenter: Kevin Leong, Product Manager, VMware
Using Web Data to Drive Revenue and Reduce Costs - Connotate
This presentation is designed to help companies strengthen their competitive advantage by leveraging publicly available Web sources.
Entrepreneurs, global industry leaders and enterprises of all sizes are turning Web data into lucrative opportunities – creating new revenue-generating products, reducing costs and re-engineering workflows to optimize pricing, streamline reporting, ensure compliance, engage interactively with clients and more.
This presentation uses a variety of success stories to illustrate ways in which businesses can use Web data to drive revenue and streamline operations.
Do you need to move enterprise database information into a Data Lake in real time, and keep it current? Or maybe you need to track real-time customer actions in order to engage them while they are still accessible. Perhaps you have been tasked with ingesting and processing large amounts of IoT data.
Lensfield is a desktop and filesystem-based tool designed as a “personal data management assistant” for the scientist. It combines distributed version control (DVCS), software transactional memory (STM) and linked open data (LOD) publishing to create a novel data management, processing and publication tool. The application “just looks after” these technologies for the scientist, providing simple interfaces for typical uses. It is built with Clojure and includes macros which define steps in a common workflow. Functions and Java libraries provide facilities for automatic processing of data which is ultimately published as RDF in a web application. The progress of data processing is tracked by a fine-grained data structure that can be serialized to disk, with the potential to include manual steps and programmatic interrupts in largely automated processes through seamless resumption. Flexibility in operation and minimizing barriers to adoption are major design features.
Software Defined anything (SDx) is a movement toward promoting a greater role for software systems in controlling different kinds of hardware - more specifically, making software more "in command" of multi-piece hardware systems and allowing for software control of a greater range of devices.
Software Defined Everything (SDx) includes:
Software Defined Networks (SDN)
Software Defined Computing (SDC)
Software Defined Storage (SDS)
Software Defined Data Centers (SDDC)
Cisco Live in-booth presentation explaining how Clustered Data ONTAP gives organizations and cloud service providers the capability to rapidly and cost-effectively deliver new services and capacity with maximum application uptime.
Denodo in the Age of Containers: How to Simplify Operation of your Virtual Layer - Denodo
Watch full webinar here: https://bit.ly/2ZDzkta
Traditional operational tasks like installation, version upgrades, infrastructure scaling and cluster management have been radically transformed by the advent of cloud platforms, containers and orchestration systems. Denodo can take advantage of these platforms to make infrastructure management a thing of the past, with the goal of reducing operating costs and operating in a much more elastic fashion.
Attend this session to learn:
- The new capabilities of Denodo 8 for managing your entire deployment.
- Infrastructure management in AWS and Azure.
- How to use Denodo in a Docker + Kubernetes environment.
Cloudian and Rubrik - Hybrid Cloud-based Disaster Recovery - Cloudian
Cloudian and Rubrik outline the benefits of using a modern hybrid cloud-based approach for your VMware backups. While everyone is promising instant restores and shorter backup windows, Rubrik's solution with Cloudian additionally adds policy-based management, reducing complexity. Tiering data to Cloudian S3 Object Storage ensures that only the hottest backups consume SSD storage. Both solutions are scale-out, so adding additional capacity is a breeze.
4 Ways FlexPod Forms the Foundation for Cisco and NetApp Success - NetApp
At Cisco and NetApp, seeing our customers succeed in their digital transformations means that we’ve succeeded too. But that’s only one of the ways we measure our performance. What’s another way? Hearing how our wide-ranging IT support helps Cisco and NetApp thrive. Here’s what makes FlexPod an indispensable part of Cisco’s and NetApp’s IT departments.
In-memory computing principles by Mac Moore of GridGain - Data Con LA
In the presentation, we will provide an overview of general in-memory computing principles and the drivers behind it. We will start with a summary of the technical drivers (abundant hardware resources) and market forces (the rise of Big Data). We will cover popular and emerging use cases for in-memory computing, from financial industry trading platforms to mobile payment processing, online advertising, online/mobile gaming back-ends and more. We will then present some foundational concepts and terminology, and discuss considerations around any in-memory solution. From there, we will illustrate how a complete in-memory computing stack like GridGain combines clustering, high performance computing, in-memory data grids, stream processing and Hadoop acceleration into one unified and easy to use platform.
Intel and MariaDB: web-scale applications with distributed logs - MariaDB plc
The combination of MariaDB and Intel® technologies is extremely powerful in the area of distributed computing. In this session, David Cohen brings you up to speed on the latest Intel technical collaboration with MariaDB – the use of shared, log-structured storage to support the persistence requirements of databases. He details how this cooperation is transforming transaction performance and cost by optimizing the combination of MariaDB and RocksDB-Cloud running on Intel® Xeon® Scalable processors and Intel® Optane™ SSDs.
Analyze your Data Lake, Fast @ Any Scale - AWS Online Tech Talks - Amazon Web Services
Learning Objectives:
- Learn how to automatically discover, catalog, and prepare your data for analytics
- Understand how to query data in your data lake without having to transform or load the data into your data warehouse
- See how to analyze data in both your data lake and data warehouse
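The query-in-place objective above can be sketched with boto3 and Amazon Athena: the SQL runs directly against files cataloged in the data lake, with no load into a warehouse. This is a minimal sketch under stated assumptions; the database, table, and result-bucket names are all hypothetical.

```python
# Sketch of querying a data lake in place with Amazon Athena.
# All names (database, table, S3 result location) are hypothetical.

def count_by_day_sql(table):
    """Build the SQL that Athena will run directly against files in S3."""
    return (
        f"SELECT date_trunc('day', event_time) AS day, count(*) AS events "
        f"FROM {table} GROUP BY 1 ORDER BY 1"
    )

# With AWS credentials configured, Athena executes the query against the
# Glue-cataloged table and writes results back to S3:
# import boto3
# athena = boto3.client("athena")
# run = athena.start_query_execution(
#     QueryString=count_by_day_sql("clickstream"),
#     QueryExecutionContext={"Database": "lake_db"},
#     ResultConfiguration={"OutputLocation": "s3://my-results-bucket/athena/"},
# )
```

The point of the sketch is that the warehouse step disappears: the table definition lives in the catalog and the data never moves.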
Build Data Lakes and Analytics on AWS: Patterns & Best Practices - Amazon Web Services
With over 90% of today’s data generated in the last two years, the rate of data growth is showing no sign of slowing down. In this session, we step through the challenges and best practices for capturing data, understanding what data you own, driving insights, and predicting the future using AWS services. We frame the session and demonstrations around common pitfalls of building data lakes and how to successfully drive analytics and insights from data. We also discuss the architecture patterns that bring together key AWS services, including Amazon S3, AWS Glue, Amazon Athena, Amazon Kinesis, and Amazon Machine Learning. Discover the real-world application of data lakes for roles including data scientists and business users.
Stephen Moon, Sr. Solutions Architect, Amazon Web Services
James Juniper, Solution Architect for the Geo-Community Cloud, Natural Resources Canada
A data lake is an architectural approach that allows you to store massive amounts of data in a central location, so it's readily available to be categorized, processed, analyzed and consumed by diverse groups within an organization. In this session, we will introduce the Data Lake concept and its implementation on AWS. We will explain the different roles our services play and how they fit into the Data Lake picture.
AWS Floor 28 - Building a Data Lake on AWS - Adir Sharabi
AWS makes it easy to build and operate highly scalable and flexible data platforms to collect, process, and analyze data so you can get timely insights and react quickly to new information. In this session we talk about how to improve over time using your data: how to take your everyday data and build relevant business insights that help you continuously improve your business processes and keep innovating based on your data.
Data Lake Implementation: Processing and Querying Data in Place (STG204-R1) - ... - Amazon Web Services
Flexibility is key when building and scaling a data lake. The analytics solutions you use in the future will almost certainly be different from the ones you use today, and choosing the right storage architecture gives you the agility to quickly experiment and migrate with the latest analytics solutions. In this session, we explore best practices for building a data lake in Amazon S3 and Amazon Glacier for leveraging an entire array of AWS, open source, and third-party analytics tools. We explore use cases for traditional analytics tools, including Amazon EMR and AWS Glue, as well as query-in-place tools like Amazon Athena, Amazon Redshift Spectrum, Amazon S3 Select, and Amazon Glacier Select.
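The query-in-place tools mentioned above, such as Amazon S3 Select, return results as an event stream rather than a plain response body. This is a hedged sketch of that pattern: the helper that reassembles the stream is real, runnable Python, while the commented-out select call uses hypothetical bucket and key names.

```python
# Sketch: S3 Select returns its result as an event stream; this helper
# gathers the 'Records' payload chunks into one bytes object.

def collect_select_records(event_stream):
    """Concatenate the 'Records' payload chunks from an S3 Select response."""
    chunks = [e["Records"]["Payload"] for e in event_stream if "Records" in e]
    return b"".join(chunks)

# With credentials configured (bucket/key names hypothetical):
# import boto3
# s3 = boto3.client("s3")
# resp = s3.select_object_content(
#     Bucket="my-lake", Key="logs/2019/01/part-0001.csv",
#     ExpressionType="SQL",
#     Expression="SELECT s.status, s.bytes FROM s3object s WHERE s.status = '500'",
#     InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
#     OutputSerialization={"CSV": {}},
# )
# rows = collect_select_records(resp["Payload"])
```

Only the filtered rows leave S3, which is what makes query-in-place cheaper than pulling whole objects.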
Modern data is massive, quickly evolving, unstructured, and increasingly hard to catalog and understand from multiple consumers and applications. This presentation will guide you through the best practices for designing a robust data architecture, highlighting the benefits and typical challenges of data lakes and data warehouses. We will build a scalable solution based on managed services such as Amazon Athena, AWS Glue, and AWS Lake Formation.
Big Data Analytics Architectural Patterns and Best Practices (ANT201-R1) - AW... - Amazon Web Services
In this session, we discuss architectural principles that help simplify big data analytics.
We'll apply these principles to various stages of big data processing: collect, store, process, analyze, and visualize. We'll discuss how to choose the right technology in each stage based on criteria such as data structure, query latency, cost, request rate, item size, data volume, durability, and so on.
Finally, we provide reference architectures, design patterns, and best practices for assembling these technologies to solve your big data problems at the right cost.
by Andre Hass, Specialist Technical Account Manager, AWS
Organizations use reports, dashboards, and analytics tools to extract insights from their data, monitor performance, and support decision making. To support these tools, data must be collected and prepared for use. We'll look at two approaches: the structured, centralized data repository of a Data Warehouse and the less-structured repository of a Data Lake. We'll compare these approaches, examine the services that support each, and explore how they work together.
Using data lakes to quench your analytics fire - AWS Summit Cape Town 2018 - Amazon Web Services
Speaker: Shafreen Sayyed, AWS
Level: 200
Traditional data storage and analytic tools no longer provide the agility and flexibility required to deliver relevant business insights. We are seeing more and more organisations shift to a data lake solution. This approach allows you to store massive amounts of data in a central location so it's readily available to be categorized, processed, analyzed, and consumed by diverse organizational groups. In this session, we'll assemble a data lake using services such as Amazon S3, Amazon Kinesis, Amazon Athena, Amazon EMR, AWS Glue and integration with Amazon Redshift Spectrum.
by Amy Che, Sr Solutions Delivery Manager AWS and Marie Yap, Technical Account Manager AWS
AWS Data & Analytics Week is an opportunity to learn about Amazon’s family of managed analytics services. These services provide easy, scalable, reliable, and cost-effective ways to manage your data in the cloud. We explain the fundamentals and take a technical deep dive into the Amazon Redshift data warehouse; Data Lake services including Amazon EMR, Amazon Athena, & Amazon Redshift Spectrum; Log Analytics with Amazon Elasticsearch Service; and data preparation and placement services with AWS Glue and Amazon Kinesis. You'll learn how to get started, how to support applications, and how to scale.
by Ben Willett, Solutions Architect, AWS
Organizations use reports, dashboards, and analytics tools to extract insights from their data, monitor performance, and support decision making. To support these tools, data must be collected and prepared for use. We'll look at two approaches: the structured, centralized data repository of a Data Warehouse and the less-structured repository of a Data Lake. We'll compare these approaches, examine the services that support each, and explore how they work together.
Build Data Lakes and Analytics on AWS: Patterns & Best Practices - BDA305 - A... - Amazon Web Services
In this session, we show you how to understand what data you have, how to drive insights, and how to make predictions using purpose-built AWS services. Learn about the common pitfalls of building data lakes and discover how to successfully drive analytics and insights from your data. Also learn how services such as Amazon S3, AWS Glue, Amazon Redshift, Amazon Athena, Amazon EMR, Amazon Kinesis, and Amazon ML services work together to build a successful data lake for various roles, including data scientists and business users.
Building a Modern Data Warehouse - Deep Dive on Amazon Redshift - Amazon Web Services
Osemeke Isibor, Solutions Architect, AWS
In this session, we take a deep dive on Amazon Redshift architecture and the latest performance enhancements that give you faster insights into your data. We also cover Redshift Spectrum, a feature of Redshift that enables you to analyze data across Redshift and your Amazon S3 data lake to deliver unique insights not possible by analyzing independent data silos.
Building Serverless Analytics Solutions with Amazon QuickSight (ANT391) - AWS... - Amazon Web Services
Querying and analyzing big data can be complicated and expensive. It requires you to set up and manage databases, data warehouses, and business intelligence (BI) applications—all of which require time, effort, and resources. Using Amazon Athena and Amazon QuickSight, you can avoid the cost and complexity by creating a fast, scalable, and serverless cloud analytics solution without the need to invest in databases, data warehouses, complex ETL solutions, and BI applications. In this session, we demonstrate how you can build a serverless big data analytics solution using Amazon Athena and Amazon QuickSight.
How to build Forecasting services using ML and deep learn... - Amazon Web Services
Forecasting is an important process for many companies and is used in many areas to try to accurately predict the growth and distribution of a product, the use of resources needed on production lines, financial presentations, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session we illustrate how to pre-process data containing a temporal component and then use an algorithm that, based on the type of data analyzed, produces an accurate forecast.
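The pre-processing step for temporal data typically means parsing timestamps, resampling onto a regular frequency, and filling the gaps that most forecasting algorithms cannot tolerate. A minimal pandas sketch with made-up data:

```python
# Minimal sketch of time-series pre-processing before forecasting:
# parse timestamps, resample to a regular daily frequency, and fill gaps.
# The data below is invented for illustration.
import pandas as pd

raw = pd.DataFrame(
    {
        "ts": ["2020-01-01", "2020-01-02", "2020-01-05"],  # Jan 3-4 missing
        "units": [10.0, 12.0, 9.0],
    }
)
raw["ts"] = pd.to_datetime(raw["ts"])

# One observation per day; empty days become NaN (min_count=1),
# then are filled by linear interpolation.
daily = (
    raw.set_index("ts")["units"]
    .resample("D")
    .sum(min_count=1)
    .interpolate("linear")
)
# daily now holds 5 evenly spaced values: 10, 12, 11, 10, 9
```

A regular, gap-free series like `daily` is what algorithms such as DeepAR expect as input.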
Big Data for Startups: how to create Big Data applications in Server... mode - Amazon Web Services
The variety and quantity of data created every day is accelerating ever faster and represents an unrepeatable opportunity to innovate and create new startups.
However, managing large quantities of data can seem complex: building large-scale Big Data clusters looks like an investment accessible only to established companies. But the elasticity of the Cloud and, in particular, Serverless services let us break through these limits.
Let's see, then, how it is possible to develop Big Data applications quickly, without worrying about the infrastructure, dedicating all our resources to developing our ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session we'll present the main features of the service and how to deploy your application in a few steps.
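Running EKS pods on Fargate hinges on a Fargate profile that selects which pods go serverless. A hedged sketch: the selector-building helper is runnable, while the commented-out API call uses hypothetical cluster and role names.

```python
# Sketch: an EKS Fargate profile selects pods (by namespace and optional
# labels) to schedule onto Fargate. Cluster/role names are hypothetical.

def fargate_selectors(namespace, labels=None):
    """Build the selector list for an EKS Fargate profile."""
    selector = {"namespace": namespace}
    if labels:
        selector["labels"] = labels
    return [selector]

# With credentials configured:
# import boto3
# eks = boto3.client("eks")
# eks.create_fargate_profile(
#     fargateProfileName="serverless-web",
#     clusterName="demo-cluster",
#     podExecutionRoleArn="arn:aws:iam::123456789012:role/eks-fargate-pods",
#     selectors=fargate_selectors("web", {"tier": "frontend"}),
# )
```

Any pod deployed to the matching namespace (and labels) then runs on Fargate with no nodes to manage.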
Twenty years ago Amazon went through a radical transformation with the goal of increasing the pace of innovation. In that period we learned how changing our approach to application development allowed us to considerably increase agility and release velocity and, ultimately, enabled us to create more reliable and scalable applications. In this session we'll illustrate how we define modern applications and how building modern apps affects not only the application architecture but also the organizational structure, development release pipelines, and even the operating model. We'll also describe common approaches to modernization, including the approach used by Amazon.com itself.
How to spend up to 90% less with containers and Spot instances - Amazon Web Services
The use of containers keeps growing.
When properly designed, container-based applications are very often stateless and flexible.
AWS ECS, EKS, and Kubernetes on EC2 can take advantage of Spot instances, leading to an average saving of 70% compared to On-Demand instances. In this session we'll discover the characteristics of Spot instances and how they can easily be used on AWS. We'll also learn how Spreaker uses Spot instances to run applications of various kinds, in production, at a fraction of the on-demand cost!
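On ECS, one common way to mix Spot and On-Demand capacity is a capacity provider strategy. This is a sketch under stated assumptions: the strategy-building helper is runnable, and the commented-out service creation uses hypothetical cluster, service, and task-definition names.

```python
# Sketch: place most ECS tasks on Fargate Spot while keeping a small
# on-demand base for resilience against Spot interruption.

def spot_heavy_strategy(spot_weight=3, on_demand_base=1):
    """Build an ECS capacityProviderStrategy: `on_demand_base` tasks are
    guaranteed on regular Fargate, the rest split 3:1 toward Spot."""
    return [
        {"capacityProvider": "FARGATE", "base": on_demand_base, "weight": 1},
        {"capacityProvider": "FARGATE_SPOT", "weight": spot_weight},
    ]

# With credentials configured (all names hypothetical):
# import boto3
# ecs = boto3.client("ecs")
# ecs.create_service(
#     cluster="demo-cluster",
#     serviceName="demo-service",
#     taskDefinition="demo-task:1",
#     desiredCount=4,
#     capacityProviderStrategy=spot_heavy_strategy(),
# )
```

Stateless services tolerate the occasional Spot interruption, which is why the abstract stresses statelessness as a prerequisite for the savings.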
In recent months, many customers have been asking us how to monetise Open APIs, simplify Fintech integrations and accelerate adoption of various Open Banking business models. Therefore, AWS and FinConecta would like to invite you to the Open Finance marketplace presentation on October 20th.
Event Agenda:
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Make your startup's market offering unique with Machine Lea... services - Amazon Web Services
To create value and build a differentiated, recognizable offering, successful startups know how to combine established technologies with innovative components built ad hoc.
AWS provides ready-to-use services and, at the same time, lets you customize and build the differentiating elements of your own offering.
Focusing on Machine Learning technologies, we will see how to select the artificial intelligence services offered by AWS and, also through a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: automate the management and deployment of... - Amazon Web Services
With the traditional approach to IT, it was difficult for many years to implement DevOps techniques, which until now have often involved manual activities, occasionally leading to application downtime and interrupting users' operations. With the advent of the cloud, DevOps techniques are now within everyone's reach, at low cost, for any kind of workload, guaranteeing greater system reliability and resulting in significant improvements in business continuity.
AWS provides AWS OpsWorks as a Configuration Management tool that aims to automate and simplify the management and deployment of EC2 instances using Chef and Puppet.
Learn how to use AWS OpsWorks to guarantee the reliability of your application running on EC2 instances.
Microsoft Active Directory on AWS to support your Windows Workloads - Amazon Web Services
Want to know the options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support group policy management, authentication, and authorization. In this session, we discuss options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and deploying Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis leveraging artificial intelligence techniques is evolving and being refined at a rapid pace. In this webinar we'll explore the possibilities offered by AWS services for applying the state of the art in computer vision to real-world scenarios.
Amazon Web Services and VMware are organizing a free virtual event next Wednesday, October 14, from 12:00 to 13:00, dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in cloud environments based on VMware vSphere® and access a broad range of AWS services, fully exploiting the potential of the AWS cloud while protecting existing VMware investments.
Build your first serverless ledger-based app with QLDB and NodeJS - Amazon Web Services
Many companies today build applications with ledger-style functionality, for example to verify the history of credits and debits in banking transactions, or to track the supply-chain flow of their products.
At the heart of these solutions are ledger databases, which provide a transparent, immutable, and cryptographically verifiable transaction log, but they are complex and costly tools to manage.
Amazon QLDB eliminates the need to build complex custom systems by providing a fully managed, serverless ledger database.
In this session we'll discover how to build a complete serverless application that uses QLDB's capabilities.
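The session works in NodeJS, but the same ledger write path can be sketched in Python with the pyqldb driver. The document-building helper below is runnable; the ledger and table names in the commented-out driver call are hypothetical.

```python
# Sketch of the ledger write path: build the document to append, then
# (commented out) insert it through the Amazon QLDB driver.

def transfer_doc(account_id, amount_cents, kind):
    """One immutable ledger entry for a credit or debit."""
    assert kind in ("credit", "debit")
    return {"accountId": account_id, "amountCents": amount_cents, "kind": kind}

# With credentials configured (ledger/table names hypothetical):
# from pyqldb.driver.qldb_driver import QldbDriver
# driver = QldbDriver(ledger_name="demo-ledger")
# driver.execute_lambda(
#     lambda txn: txn.execute_statement(
#         "INSERT INTO transactions ?", transfer_doc("acct-42", 1999, "credit")
#     )
# )
```

Every insert becomes part of the cryptographically verifiable journal, so the application never needs its own audit table.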
With the rise of microservice architectures and rich mobile and web applications, APIs are more important than ever for delivering a great user experience. In this session we'll learn how to tackle modern API design challenges with GraphQL, an open-source API query language used by Facebook, Amazon, and others, and how to use AWS AppSync, a managed serverless GraphQL service on AWS. We'll dive into several scenarios, understanding how AppSync can help solve these use cases by building modern APIs with real-time and offline data-update capabilities.
We'll also learn how Sky Italia uses AWS AppSync to deliver real-time sports updates to users of its web portal.
Oracle Databases and VMware Cloud™ on AWS: myths to debunk - Amazon Web Services
Many organizations reap the benefits of the cloud by migrating their Oracle workloads, securing considerable gains in agility and cost efficiency.
Migrating these workloads can create complexity during application modernization and refactoring, and performance risks can be introduced when moving applications out of on-premises data centers.
In these slides, AWS and VMware experts present simple, practical tips to ease and streamline the migration of Oracle workloads while accelerating the transformation to the cloud; they also dive deeper into the architecture and show how to exploit the full potential of VMware Cloud™ on AWS.
Amazon Elastic Container Service (Amazon ECS) is a highly scalable container management service that simplifies managing Docker containers through an orchestration layer controlling deployment and lifecycle. In this session we'll present the main features of the service, reference architectures for different workloads, and the simple steps needed to quickly migrate one or more of your containers.