Amazon Simple Storage Service (S3) has been providing developers and IT teams with secure, durable, highly scalable cloud storage for ten years.
This webinar will share our insights from the past ten years of live customer environments, including backup, restore, archive, and compliance best practices as implemented by customers running some of the largest data stores in the cloud. We will also do a quick review of the six different ways to transfer data into and out of AWS cloud storage, discuss how you can accelerate data transfers into and out of S3 over long distances and slow networks, and share some new developments with the AWS Import/Export Snowball appliance.
Learning Objectives:
• Best practices to keep data safe and cost-effective (Standard-Infrequent Access, versioning, cross-region replication, lifecycle policies)
• Quick overview of transfer services (Direct Connect, Snowball, Firehose, third-party partnerships, Storage Gateway)
• Deep dive on new ways to accelerate data transfers over long distances and slow networks
This session is for IT pros working with compliance managers to deliver solutions that lower costs and still meet compliance demands. You will learn how to move large-scale data stores to the cloud while remaining compliant with existing regulations. Services mentioned: S3, Glacier and the Vault Lock feature, Snowball, ingestion services.
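The lifecycle and versioning controls listed in the objectives above can be sketched with the AWS SDK for Python. This is a minimal sketch, not the webinar's own material: the rule ID, prefix, and transition windows are illustrative assumptions you would tune for your workload.

```python
# Hypothetical lifecycle policy: tier data down as it ages.
lifecycle_policy = {
    "Rules": [
        {
            "ID": "archive-backups",           # illustrative rule name
            "Status": "Enabled",
            "Filter": {"Prefix": "backups/"},  # illustrative prefix
            "Transitions": [
                # Move to Standard-Infrequent Access after 30 days...
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                # ...then archive to Glacier after 90 days.
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            # With versioning on, expire old object versions after a year.
            "NoncurrentVersionExpiration": {"NoncurrentDays": 365},
        }
    ]
}

def apply_policy(bucket: str) -> None:
    """Attach the policy to a bucket (requires AWS credentials)."""
    import boto3  # imported lazily so the sketch runs without the SDK
    boto3.client("s3").put_bucket_lifecycle_configuration(
        Bucket=bucket, LifecycleConfiguration=lifecycle_policy
    )
```

Calling `apply_policy("my-backup-bucket")` would submit the configuration; S3 then applies the transitions automatically, which is what makes lifecycle policies a low-effort cost lever.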
Getting Started with the Hybrid Cloud: Enterprise Backup and Recovery - Amazon Web Services
This session is for architects and storage admins seeking simple and non-disruptive ways to adopt cloud platforms in their organizations. You will learn how to deliver lower costs and greater scale with nearly seamless integration into your existing backup and recovery (B&R) processes. Services mentioned: S3, Glacier, Snowball, third-party partners, Storage Gateway, and ingestion services.
Amazon CloudFront Office Hour, “Using Amazon CloudFront with Amazon S3 & AWS ... - Amazon Web Services
These slides cover information from the August 9, 2016 Amazon CloudFront office hour, including a brief overview of Amazon CloudFront, key benefits of the service, how to use it with Amazon S3 and AWS ELB, pricing, and how to get started.
Come learn about new and existing Amazon S3 features that can help you better protect your data, save on cost, and improve usability, security, and performance. We will cover a wide variety of Amazon S3 features and go into depth on several newer features with configuration and code snippets, so you can apply the learnings on your object storage workloads.
Amazon S3 and Amazon Glacier provide developers and IT teams with secure, durable, highly scalable object storage with no minimum fees or setup costs. In this webcast, we will provide an introduction to each service, dive deep into key features of Amazon S3 and Amazon Glacier, and explore the different use cases for which these services are optimized.
Learning Objectives:
• Business value of Amazon S3 and Amazon Glacier
• Leveraging S3 for web applications, media delivery, big data analytics and backup
• Leveraging Amazon Glacier to build cost-effective archives
• Understand lifecycle management of AWS storage services
Who Should Attend:
• Developers, DevOps Engineers, Engineers and System Administrators
Deep Dive on Amazon S3 - March 2017 AWS Online Tech Talks - Amazon Web Services
Learn about new and existing Amazon S3 features that can help you better protect your data, save on cost, and improve usability, security, and performance. We will cover a wide variety of Amazon S3 features and go into depth on several newer features with configuration and code snippets, so you can apply what you learn to your object storage workloads.
Learning Objectives:
• Review best practices to reduce costs, protect against data loss, and increase performance in Amazon S3
• Learn about new S3 storage management features that help you align storage with business needs
• Understand data security capabilities available in S3 that help protect against malicious or accidental deletion or other data loss
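One concrete S3 capability behind the data-protection objective above is bucket versioning. This is a hedged sketch, not the session's own code; the bucket name passed by the caller and the commented MFA option are placeholders.

```python
# Versioning keeps every overwrite and delete as a recoverable object
# version, so accidental or malicious deletions can be rolled back.
versioning_config = {"Status": "Enabled"}
# Optionally require MFA for permanent version deletes (serial omitted here):
# versioning_config["MFADelete"] = "Enabled"

def enable_versioning(bucket: str) -> None:
    """Turn versioning on for a bucket (requires AWS credentials)."""
    import boto3  # lazy import; the config dict itself needs no SDK
    boto3.client("s3").put_bucket_versioning(
        Bucket=bucket, VersioningConfiguration=versioning_config
    )
```

Once enabled, deleting an object only adds a delete marker; earlier versions remain retrievable until a lifecycle rule expires them.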
Deploying a Disaster Recovery Site on AWS: Minimal Cost with Maximum Efficiency - Amazon Web Services
In the event of a disaster, you need to be able to recover lost data quickly to ensure business continuity. For critical applications, keeping your time to recover and data loss to a minimum as well as optimizing your overall capital expense can be challenging. This session presents AWS features and services along with Disaster Recovery architectures that you can leverage when building highly available and disaster resilient applications. We will provide recommendations on how to improve your Disaster Recovery plan and discuss example scenarios showing how to recover from a disaster.
SRV403 Deep Dive on Object Storage: Amazon S3 and Amazon Glacier - Amazon Web Services
In this session, storage experts will walk you through Amazon S3 and Amazon Glacier, bulk data repositories that can deliver 99.999999999% durability and scale past trillions of objects worldwide – with cost points competitive against tape archives. Learn about the different ways you can accelerate data transfer into S3 and get a close look at new tools to secure and manage your data more efficiently. See how Amazon Athena runs serverless analytics on your data and hear about expedited and bulk retrievals from Amazon Glacier. Learn how AWS customers have built solutions that turn their data from a cost into a strategic asset, and bring your toughest questions straight to our experts.
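The expedited and bulk Glacier retrievals mentioned above are chosen per retrieval job. A minimal sketch, assuming a placeholder archive ID and vault name:

```python
# Glacier retrieval speed and cost are chosen per job via "Tier":
# "Expedited" (minutes), "Standard" (a few hours), or "Bulk" (cheapest).
job_params = {
    "Type": "archive-retrieval",
    "ArchiveId": "EXAMPLE-ARCHIVE-ID",  # placeholder archive id
    "Tier": "Bulk",
}

def start_retrieval(vault_name: str) -> str:
    """Start the retrieval job and return its id (requires credentials)."""
    import boto3  # lazy import; building job_params needs no SDK
    glacier = boto3.client("glacier")
    resp = glacier.initiate_job(
        accountId="-",  # "-" means the account owning the credentials
        vaultName=vault_name,
        jobParameters=job_params,
    )
    return resp["jobId"]
```

The job runs asynchronously; you poll or subscribe to an SNS topic for completion, then download the staged archive.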
AWS re:Invent 2016: Best practices for running enterprise workloads on AWS (E... - Amazon Web Services
Fortune 500 companies are increasingly using cloud services to run enterprise workloads to improve security, increase agility, and enable scale. Learn how OpenEye is running their AWS-native platform and workflow engine to support collaboration and data sharing at large pharmaceutical companies like Pfizer. In this session, OpenEye will share cloud best practices around security controls, cross-departmental collaboration across the enterprise, and agility at scale. Attendees will gain practical tips for using AWS in the enterprise and healthcare industries.
AWS re:Invent 2016: How Netflix Achieves Email Delivery at Global Scale with ... - Amazon Web Services
Companies around the world are using Amazon Simple Email Service (Amazon SES) to send millions of emails to their customers every day, and scaling linearly, at cost. In this session, you learn how to use the scalable and reliable infrastructure of Amazon SES. In addition, Netflix talks about their advanced Messaging program, their challenges, how SES helped them with their goals, and how they architected their solution for global scale and deliverability.
AWS APAC Webinar Week - Launching Your First Big Data Project on AWS - Amazon Web Services
Want to get ramped up on how to use Amazon's big data services and launch your first big data application on AWS?
Join us on a journey as we build a big data application in real-time using Amazon EMR, Amazon Redshift, Amazon Kinesis, Amazon DynamoDB, and Amazon S3.
In this session we review architecture design patterns for big data solutions on AWS, and give you access to everything you need so that you can rebuild and customize the application yourself.
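The ingest step of a pipeline like the one described above typically starts with writing events to Amazon Kinesis. A minimal sketch under stated assumptions: the stream name and event shape are hypothetical, not from the session.

```python
import json

STREAM = "clickstream-demo"  # hypothetical stream name

def make_record(user_id: str, action: str) -> dict:
    """Build one Kinesis record; the partition key spreads load by user."""
    payload = json.dumps({"user": user_id, "action": action})
    return {"Data": payload.encode("utf-8"), "PartitionKey": user_id}

def send(records: list) -> None:
    """Batch-write the records to the stream (requires AWS credentials)."""
    import boto3  # lazy import; record construction needs no SDK
    boto3.client("kinesis").put_records(StreamName=STREAM, Records=records)

batch = [make_record("u1", "click"), make_record("u2", "view")]
```

Downstream, EMR or Kinesis consumers read the stream, land results in S3, and load aggregates into Redshift or DynamoDB, matching the architecture the session walks through.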
Migrate from Oracle to Amazon Aurora using AWS Schema Conversion Tool & AWS D... - Amazon Web Services
• Understand the issues with commercial database pricing and licensing.
• Learn about the benefits of Amazon Aurora for improving performance and decreasing costs.
• See how AWS Database Migration Service helps with your migration.
• See how AWS Schema Conversion Tool makes conversions simple and quick.
If you’re looking to improve application performance and availability and decrease database costs, it’s time to replace your expensive Oracle databases with an open-source compatible solution. Amazon Aurora is a MySQL-compatible relational database that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. You'll learn how to use the AWS Database Migration Service to migrate your data with minimal downtime, and how the AWS Schema Conversion Tool converts your Oracle schemas and procedural code into Amazon Aurora. We’ll follow with a quick demo of the entire process.
Explore Amazon DynamoDB capabilities and benefits in detail and learn how to get the most out of your DynamoDB database. We go over best practices for schema design with DynamoDB across multiple use cases, including gaming, AdTech, IoT, and others. We explore designing efficient indexes, scanning, and querying, and go into detail on a number of recently released features, including JSON document support, DynamoDB Streams, and more. We also provide lessons learned from operating DynamoDB at scale, including provisioning DynamoDB for IoT.
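The index-design point above can be made concrete with a query against a global secondary index. This sketch uses the illustrative table, index, and attribute names from AWS's gaming example; treat them all as assumptions.

```python
# Querying a global secondary index avoids a full-table scan; here a
# hypothetical "GameTitleIndex" returns the top scores for one title.
query_args = {
    "TableName": "GameScores",        # hypothetical table
    "IndexName": "GameTitleIndex",    # hypothetical GSI
    "KeyConditionExpression": "GameTitle = :t",
    "ExpressionAttributeValues": {":t": {"S": "Meteor Blasters"}},
    "ScanIndexForward": False,        # highest sort-key values first
    "Limit": 10,
}

def top_scores() -> list:
    """Run the query and return the matching items (needs credentials)."""
    import boto3  # lazy import; the request dict needs no SDK
    return boto3.client("dynamodb").query(**query_args)["Items"]
```

Because the GSI's partition key is the game title and its sort key is the score, a leaderboard read costs a single indexed query rather than a scan.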
AWS re:Invent 2016: Turbocharge Your Microsoft .NET Developments with AWS (DE... - Amazon Web Services
In this session, you will discover how to integrate the AWS developer tools into your development process. We will demonstrate how to leverage AWS services, the .NET SDK, and the Visual Studio Toolkit to simplify and streamline your development processes. This session is targeted at development teams using Microsoft Visual Studio and the Microsoft ecosystem of products. Most of the presentation will be in Visual Studio.
Deep Dive on MySQL Databases on AWS - AWS Online Tech Talks - Amazon Web Services
Learning Objectives:
- Learn about MySQL deployment options on AWS
- Learn how to maintain high availability and security of your data
- Learn how to migrate MySQL databases to Amazon RDS
Learn how the Blue/Green Deployment methodology combined with AWS tools and services can help reduce the risks associated with software deployment. We will illustrate common patterns and highlight ways deployment risks are mitigated by each pattern. Topics will include how services like AWS CloudFormation, AWS Elastic Beanstalk, Amazon EC2 Container Service, Amazon Route53, Auto Scaling and Elastic Load Balancing can help automate deployment. We will also address how to effectively manage deployments in the context of data model and schema changes. Learn how you can adopt blue/green for your software release processes in a cost-effective and low-risk way.
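The Route53 role in blue/green deployment described above is commonly implemented with weighted record sets. A minimal sketch, assuming placeholder domain names, load-balancer endpoints, and a 90/10 traffic split:

```python
def weighted_record(name: str, target: str, set_id: str, weight: int) -> dict:
    """One weighted CNAME; Route53 splits traffic in proportion to Weight."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": name,
            "Type": "CNAME",
            "SetIdentifier": set_id,
            "Weight": weight,
            "TTL": 60,  # short TTL so shifts take effect quickly
            "ResourceRecords": [{"Value": target}],
        },
    }

# Placeholder endpoints: send 90% of traffic to blue, 10% to green.
changes = [
    weighted_record("app.example.com", "blue-elb.example.com", "blue", 90),
    weighted_record("app.example.com", "green-elb.example.com", "green", 10),
]

def apply_shift(zone_id: str) -> None:
    """Submit the change batch (requires AWS credentials)."""
    import boto3  # lazy import; building the batch needs no SDK
    boto3.client("route53").change_resource_record_sets(
        HostedZoneId=zone_id, ChangeBatch={"Changes": changes}
    )
```

Ratcheting the weights (90/10, then 50/50, then 0/100) gives a gradual cutover with an instant rollback path, which is the risk-mitigation property the session highlights.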
AWS and its partners offer a wide range of tools and features to help you meet your security objectives. These tools mirror the familiar controls you deploy within your on-premises environments. AWS provides security-specific tools and features across network security, configuration management, access control, and data security. In addition, AWS provides monitoring and logging tools that can provide full visibility into what is happening in your environment. In this session, you will be introduced to the range of security tools and features that AWS offers, and the latest security innovations coming from AWS.
AWS re:Invent 2016: Getting Started with the Hybrid Cloud: Enterprise Backup ... - Amazon Web Services
This session is for architects and storage admins seeking simple and non-disruptive ways to adopt cloud platforms in their organizations. You will learn how to deliver lower costs and greater scale with nearly seamless integration into your existing Backup and Recovery processes to achieve fast, simple wins that demonstrate the scale and flexibility of cloud services for storage. Services mentioned: S3, Glacier, Snowball, third-party partners, Storage Gateway, and cloud data migration services.
(STG406) Using S3 to Build and Scale an Unlimited Storage Service - Amazon Web Services
Amazon Cloud Drive's plans to provide a low-cost, unlimited storage service presented a major engineering challenge. In this session, you learn how the Amazon Cloud Drive team designed and optimized the storage back end, Amazon S3, to handle millions of users while containing infrastructure costs. The lead engineers share details of how they built the service for massive scale, the regular steps they take to increase performance and efficiency, and proven techniques for scaling and optimization learned from experience.
Not just for archiving or compliance use cases, Amazon Glacier accommodates customers simply looking to replace their on-premises long-term storage with a cost-efficient, durable cloud option from which they can easily and quickly access their data when they need to. This session will introduce newly launched features for Amazon Glacier, review the current service feature set, and share the global data center shutdown and storage strategy of Sony DADC New Media Solutions (NMS). NMS is Sony's digital servicing division providing global digital distribution, linear playout, and white-label OTT/commerce solutions for clients such as BBC Worldwide, NBCUniversal, Sony PlayStation, and Funimation Entertainment.
Hear from Andy Shenkler, NMS's Chief Technology and Solutions Officer, as he talks about the key factors that drove the organization's decision to move away from tape, toward the cloud, and out of the infrastructure business overall. Learn more about the impact and operational practices inside a world-class digital supply chain as the team moved over 20 petabytes of data, over 1M hours of video, to the cloud and never looked back.
Deep Dive on Amazon EC2 Instances - January 2017 AWS Online Tech Talks - Amazon Web Services
Amazon EC2 provides a broad selection of instance types to accommodate a diverse mix of workloads. In this session, we provide an overview of the Amazon EC2 instance platform, key platform features, and the concept of instance generations. We dive into the current-generation design choices of the different instance families, including the General Purpose, Compute Optimized, Storage Optimized, Memory Optimized, and GPU instance families. We will also provide an overview of the newest instances announced at re:Invent, including the latest generation of Memory Optimized (R4) and Compute Optimized (C5) instances, new Storage Optimized High I/O (I3) instances, and new larger T2 instances. We also detail best practices and share performance tips for getting the most out of your Amazon EC2 instances.
Learning Objectives:
• Get an overview of the EC2 instance platform, key platform features, and the concept of instance generations
• Learn about the latest generation of Amazon EC2 Instances
• Learn best practices around instance selection to optimize performance
Eleventh lecture of the Web 2.0 course taught at the Università di Milano-Bicocca.
For info, see <a href="http://www.corsoweb20polillo.blogspot.com">www.corsoweb20polillo.blogspot.com</a>
An assessment of the privacy issues in the world of social networks, with particular attention to Facebook and to Danah Boyd, an expert in the field.
A project by Vincenzo Bellisario and Matteo Serratoni, students in the Laurea Magistrale program in Teoria e Tecnologia della Comunicazione at the Università degli Studi di Milano-Bicocca.
Emerging Identities in Italian Networked Publics - Agnese Vellar
In late modernity, the different levels of society, from individual identities to cultures, must be understood as processes that emerge from reflexive practices of self-construction. An ethnographic lens makes it possible to observe the biographical paths of social actors who, moving through global cultural flows, produce new social structures and new localities. In particular, young people's participation in social media (forums, chat, social network sites) is giving rise to "networked publics", understood both as digital social spaces and as imagined communities.
In this article the author retraces the debate on the relationship between media, identity, and globalization, focusing in particular on fan cultures as an example of translocal, weak-identity public spheres in which passionate viewers (fans) interact in networked publics around a media cult, giving rise to communities of practice.
The author then presents a multi-sited ethnography of Italian TV-series fandom, with a case study of the fansubbing community ::Italian Subs Addicted::. Fan groups publish amateur productions on social media, translating and adapting the content distributed by entertainment corporations. Italian fans thus cooperate and collaborate within networked publics, participating in the construction of a networked collectivism around which a transnational imagined community emerges.
Come learn about new and existing Amazon S3 features that can help you better protect your data, save on cost, and improve usability, security, and performance. We will cover a wide variety of Amazon S3 features and go into depth on several newer features with configuration and code snippets, so you can apply the learnings on your object storage workloads.
Social Media – Course Introduction [a.y. 2014-2015] – UniTo – Agnese Vellar
Introduction to the course for students of the Master's degree programmes in Public and Political Communication and in ICT and Media Communication – University of Turin http://goo.gl/B6vE6M
Ways to stay connected: Harnessing, managing, and preventing context collapse... – Stefanie Duguay
Social media sites, such as Facebook, present the potential for people to organise connections with acquaintances from all walks of life within a single site. This can lead to context collapse, a flattening of the boundaries that generally separate audiences for self-expression. Drawing on literature about young people’s social media use and my research with LGBTQ early adults, I will discuss how context collapse is experienced as an event through which individuals can intentionally redefine themselves across audiences or manage identity expressions received by unintended audiences. Possible strategies for reinstating contexts on social media will also be explored in this presentation.
Slides from the lectures of the "Strumenti e applicazioni del Web" (Web Tools and Applications) course for the Master's degree in Theory and Technology of Communication – University of Milano-Bicocca (prof. R. Polillo) – lecture of 6 May 2014
In this session, we'll expand on the S3 re:Invent deep-dive session with a hands-on workshop on advanced S3 features and storage management capabilities. AWS S3 and Glacier experts will be on hand to dive deep into S3 architecture; performance and scalability optimization; how to analyze your content and leverage storage tiers (S3 Standard, S3 Standard – Infrequent Access, Glacier) to balance cost and SLAs; security considerations; replication with Cross-Region Replication (CRR); versioning for data protection; and more.
In the hands-on lab, we’ll walk through a customer scenario: architecting a high-performance infrastructure for consumer applications. In the scenario, we’ll use sample data sets on S3, analyze object retrieval patterns and design a complete solution using many of the features S3 offers including migrating objects to an appropriate tier.
Prerequisites:
- Participants should have an AWS account established and available for use during the workshop.
- Please bring your own laptop.
This session drills deep into the Amazon S3 technical best practices that help you maximize storage performance for your use case. We provide real-world examples and discuss the impact of object naming conventions and parallelism on Amazon S3 performance, and describe the best practices for multipart uploads and byte-range downloads.
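To make the parallelism practices concrete, here is a minimal sketch (not from the session itself) of how a client might split an object into the byte ranges used for multipart uploads or parallel byte-range downloads; the object and part sizes are hypothetical:

```python
def part_ranges(object_size, part_size):
    """Split an object into (start, end) byte ranges for multipart
    upload or parallel byte-range GETs. The end offset is inclusive,
    matching the HTTP Range header convention."""
    ranges = []
    start = 0
    while start < object_size:
        end = min(start + part_size, object_size) - 1
        ranges.append((start, end))
        start = end + 1
    return ranges

# A 100 MB object split into 25 MB parts yields four ranges that can
# be uploaded or fetched concurrently.
print(part_ranges(100 * 1024 * 1024, 25 * 1024 * 1024))
```

Each range would map to one part in a multipart upload, or one `Range: bytes=start-end` request in a parallel download.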
We built event-driven user interfaces for decades. What about bringing the same approach to mobile, web, and IoT backend applications? You have to understand how data flows and how changes propagate, using reactive programming techniques. You can then focus on the core functionality to build and on the relationships among the resources you use. Your application behaves like a "spreadsheet", where dependent resources are updated automatically when something "happens", and is decomposed into scalable microservices without your having to manage the infrastructure. The resulting architecture is efficient and cost-effective to run on AWS, and managing availability, scalability, and security becomes part of the implementation itself.
A deep dive into the AWS IoT service announced at AWS re:Invent in October. We will cover the components of the AWS IoT platform, demonstrate the AWS IoT console and command-line experience, and walk through the client-side SDKs that AWS provides to help developers build rich applications for their devices, whilst removing the heavy lifting associated with creating a scalable, secure, and reliable set of cloud services to support those applications.
Learn about new and existing Amazon S3 features that can help you better protect your data, save on cost, and improve usability, security, and performance. We will cover a wide variety of Amazon S3 features and go into depth on several newer features with configuration and code snippets, so you can apply the learnings on your object storage workloads.
Best Practices for Building a Data Lake with Amazon S3 - August 2016 Monthly ... – Amazon Web Services
Uncovering new, valuable insights from big data requires organizations to collect, store, and analyze increasing volumes of data from multiple, often disparate sources at disparate points in time. This makes it difficult to handle big data with data warehouses or relational database management systems alone. A Data Lake allows you to store massive amounts of data in its original form, without the need to enforce a predefined schema, enabling a far more agile and flexible architecture, which makes it easier to gain new types of analytical insights from your data.
Learning Objectives:
• Introduce key architectural concepts to build a Data Lake using Amazon S3 as the storage layer
• Explore storage options and best practices to build your Data Lake on AWS
• Learn how AWS can help enable a Data Lake architecture
• Understand some of the key architectural considerations when building a Data Lake
• Hear some important Data Lake implementation considerations when using Amazon S3 as your Data Lake
Introduction to key architectural concepts to build a data lake using Amazon S3 as the storage layer and making this data available for processing with a broad set of analytic options including Amazon EMR and open source frameworks such as Apache Hadoop, Spark, Presto, and more.
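As a small illustration of the "query in place" pattern a data lake on S3 enables, the sketch below shows the shape of an Athena query request. The bucket, database, and table names are hypothetical, and no AWS call is made here:

```python
# Sketch: the request an Athena client would send to run SQL directly
# against data stored in S3. All names are illustrative placeholders.
query_request = {
    "QueryString": "SELECT status, COUNT(*) AS hits "
                   "FROM access_logs GROUP BY status",
    "QueryExecutionContext": {"Database": "datalake_db"},
    # Athena writes query results back to an S3 location of your choice.
    "ResultConfiguration": {
        "OutputLocation": "s3://example-results-bucket/athena/"
    },
}

# With boto3 this dict would be passed as keyword arguments:
#   athena = boto3.client("athena")
#   athena.start_query_execution(**query_request)
```

The same data can then also be processed by EMR, Spark, or Presto without moving it out of S3.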
Learning Objectives:
- Review best practices to reduce costs, protect against data loss, and increase performance in Amazon S3
- Learn about new S3 storage management features that help you align storage with business needs
- Understand data security capabilities available in S3 that help protect against malicious or accidental deletion or other data loss
Learn about new and existing Amazon S3 features that can help you better protect your data, save on cost, and improve usability, security, and performance. We will cover a wide variety of Amazon S3 features and go into depth on several newer features with configuration and code snippets, so you can apply the learnings to your object storage workloads.
Deep Dive On Object Storage: Amazon S3 and Amazon Glacier - AWS PS Summit Can... – Amazon Web Services
Learn about the different ways you can accelerate data transfer into S3 and get a close look at new tools to secure and manage your data more efficiently. Discover how AWS customers have built solutions that turn their data into a strategic asset.
Speakers: Ben Thurgood, Solutions Architect, Amazon Web Services, with Timothy Eckersley, Enterprise Architect, NSW Pathology
Level: 300
In this session, we cover some of the recently announced features. We then talk about using S3 event sources, the various S3 storage classes, cross-region replication, and VPC endpoints.
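As an illustration of the S3 event sources mentioned above, here is a sketch of a bucket notification configuration that would invoke a Lambda function when new objects arrive under a prefix; the function ARN and prefix are hypothetical:

```python
# Sketch of an S3 bucket notification configuration. With boto3 this
# dict would be passed to put_bucket_notification_configuration; no
# AWS call is made here, and the ARN/prefix are placeholders.
notification_config = {
    "LambdaFunctionConfigurations": [
        {
            "LambdaFunctionArn": (
                "arn:aws:lambda:us-east-1:123456789012"
                ":function:process-upload"
            ),
            # Fire on any object-creation event (PUT, POST, multipart).
            "Events": ["s3:ObjectCreated:*"],
            # Only for keys under the "incoming/" prefix.
            "Filter": {
                "Key": {
                    "FilterRules": [
                        {"Name": "prefix", "Value": "incoming/"}
                    ]
                }
            },
        }
    ]
}
```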
February 2016 Webinar Series - Use AWS Cloud Storage as the Foundation for Hy... – Amazon Web Services
Re-architecting applications for the cloud can be disruptive to existing on-premises solutions. One way to ease this transition is to adopt a hybrid approach to cloud.
This webinar will help you understand how to relate traditional on-premises storage infrastructures to the cloud, highlight use cases for easy wins, and show how to navigate architectural decisions and best practices for hybrid designs.
Learning Objectives:
Learn how to decide between object, file and block storage, the key benefits and differentiators for each, and when to apply them in hybrid models
Who Should Attend:
Application developers, enterprise architects, and storage and backup managers familiar with traditional on-premises storage offerings
Deep Dive on Object Storage: Amazon S3 and Amazon Glacier | AWS Public Sector... – Amazon Web Services
In this session, storage experts will walk you through Amazon S3 and Amazon Glacier, bulk data repositories that can deliver 99.999999999% durability and scale past trillions of objects worldwide - with cost points competitive against tape archives. Learn about the different ways you can accelerate data transfer into S3 and get a close look at new tools to secure and manage your data more efficiently. See how Amazon Athena runs "query in place" analytics on your data and hear about the new expedited and bulk retrievals from Amazon Glacier. Learn how AWS customers have built solutions that turn their data from a cost into a strategic asset, and bring your toughest questions straight to our experts. Learn More: https://aws.amazon.com/government-education/
AWS re:Invent 2016: How Amazon S3 Storage Management Helps Optimize Storage a... – Amazon Web Services
Customers using Amazon S3 at large scale benefit greatly from storage management features. Storage lifecycle policies help them reduce storage costs. Cross-region replication makes it easier to copy data between AWS regions for compliance or disaster recovery. Event notifications allow automatic initiation of processes on objects as they arrive, or capture information about objects and log it for security purposes. In this session, you'll learn about these features, and also several new storage management features in Amazon S3 that give users unmatched visibility into what data they are storing and how that data is being used. These new features make it simpler to analyze usage by users, apps, or organizations, to highlight anomalies, and to optimize business process workflows. They also help identify opportunities to reduce costs, improve performance, and archive infrequently used data. In addition, they can provide insight into who is accessing data stored in S3. As part of this talk, AWS customer Pinterest shows how they have been able to leverage many of the new S3 storage management features to reduce their storage costs significantly by moving a large amount of their data from S3 Standard to S3 Standard – Infrequent Access storage.
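A minimal sketch of the kind of lifecycle configuration such a migration relies on, tiering objects from S3 Standard to Standard – Infrequent Access and then Glacier; the prefix and day counts are illustrative, not Pinterest's actual policy:

```python
# Sketch of an S3 lifecycle configuration. With boto3 this dict would
# be passed to put_bucket_lifecycle_configuration; no AWS call is made
# here, and the prefix/day counts are placeholders.
lifecycle_config = {
    "Rules": [
        {
            "ID": "tier-and-expire-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [
                # Move to Standard-IA once objects cool down...
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                # ...then archive to Glacier.
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            # Delete after a year.
            "Expiration": {"Days": 365},
        }
    ]
}
```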
This webinar discussed the use of the AWS Cloud as a disaster recovery (DR) environment. It also explored how the architectural approaches to DR in the AWS Cloud make DR and BCP a great scenario for familiarising yourself with AWS before moving on to production application deployments in the cloud.
Storage Data Management: Tools and Templates to Seamlessly Automate and Optim... – Amazon Web Services
by Robbie Wright, Sr. Product Marketing Manager, AWS
Learn about the features supported by AWS storage services, such as object tagging, storage class analysis, inventorying, and monitoring. These tools can help automate data lifecycle policies for optimal and cost-effective storage management, provide detailed insights into usage across the entire enterprise, and limit access to certain accounts.
A brief introduction to the different storage options available on the AWS platform, and to the AWS value proposition in the disaster recovery (DR) scenario.
Deep Dive on Amazon Glacier Covering New Retrieval Features - December 2016 M... – Amazon Web Services
With Expedited, Standard, and Bulk retrievals, you can leverage Amazon Glacier’s extremely low-cost storage service to support the full spectrum of archive use cases. These range from deep archives that are never retrieved to active workloads with minute-level access, such as media broadcasting, to petabyte-scale content distribution or big data analytics use cases. This session will dive deep into the recently launched retrieval features, review Amazon Glacier’s current feature set, and share use cases from customers leveraging Glacier’s latest features.
Learning Objectives:
• Dive deep on Amazon Glacier and the new retrieval features
• Learn about the benefits of Amazon Glacier and the new retrieval features
• Learn about the different use cases
• Learn how to get started using Amazon Glacier
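As a small example of choosing between the Expedited, Standard, and Bulk options above, this sketch shows the shape of a restore request for a Glacier-archived object; the tier and retention period are illustrative:

```python
# Sketch of a Glacier restore request. With boto3 this dict would be
# passed as the RestoreRequest argument:
#   s3 = boto3.client("s3")
#   s3.restore_object(Bucket="example-bucket", Key="archive/file.dat",
#                     RestoreRequest=restore_request)
# Bucket, key, and values are placeholders; no AWS call is made here.
restore_request = {
    # How long the temporary restored copy stays available in S3.
    "Days": 2,
    # "Expedited" for minute-level access, "Standard" for hours,
    # "Bulk" for the lowest-cost petabyte-scale retrievals.
    "GlacierJobParameters": {"Tier": "Expedited"},
}
```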
SRV403 Deep Dive on Object Storage: Amazon S3 and Amazon Glacier – Amazon Web Services
In this session, storage experts will walk you through Amazon S3 and Amazon Glacier, bulk data repositories that can deliver 99.999999999% durability and scale past trillions of objects worldwide – with cost points competitive against tape archives. Learn about the different ways you can accelerate data transfer into S3 and get a close look at new tools to secure and manage your data more efficiently. Hear about Amazon Glacier and new capabilities to get access to your data faster with expedited retrievals. Learn how AWS customers have built solutions that turn their data from a cost into a strategic asset, and bring your toughest questions straight to our experts.
Learn how Maxwell Health Protects its MongoDB Workloads on AWS – Amazon Web Services
Maxwell Health, a software-as-a-service healthcare benefits management provider, needed to meet recovery SLAs for MongoDB workloads on Amazon Web Services (AWS). The company turned to Rubrik Datos IO for a modern, scalable, cloud-native backup and recovery solution. Within minutes, Maxwell Health had launched Rubrik Datos IO RecoverX to protect its AWS environment. RecoverX helped Maxwell meet strict backup and recovery SLAs, simplify MongoDB data protection efforts, and save backup storage costs for Amazon S3.
Join our webinar to learn how Rubrik Datos IO enabled Maxwell Health to lower its recovery time by 30 percent and reduce storage costs by 90 percent for its MongoDB backups on AWS.
Best Practices for Protecting Cloud Workloads - November 2016 Webinar Series – Amazon Web Services
Traditional backup software works for on-premises workloads, but protecting the data for workloads running in the cloud is a new game. Backup windows may be non-existent, data may be scattered across geographies and platforms, and there may simply be too much to effectively traverse with traditional methods. Protecting cloud workload data requires some adjustments to your thinking. Join our storage experts to learn more about best practices for preventing loss, rolling back to recovery points, and fitting into backup windows. We will cover protection features and design considerations for protecting data with S3, Glacier, EBS and EFS.
Learning Objectives:
• Learn how to design for recovery points and recovery times using the native AWS storage tools for file, block and object storage
On-premises compliance archival systems are expensive to maintain, form isolated IT silos, suffer very inefficient utilization, and are poorly protected from disaster. In AWS, we provide better infrastructure durability, better physical security, lower cost, and richer features for data access. Consider that many data lakes contain medical records, trading records, and other regulated content. The industry now has the opportunity to run rich analytics against its data while retaining regulatory compliance.
Similar to AWS April 2016 Webinar Series - S3 Best Practices - A Decade of Field Experience
How to build Forecasting services using ML and deep learn... algorithms – Amazon Web Services
Forecasting is an important process for a great many companies and is used in many areas to try to predict accurately the growth and distribution of a product, the resources needed on production lines, financial projections, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session we will show how to pre-process data that contains a temporal component and then use an algorithm that, starting from the type of data analysed, produces an accurate forecast.
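As a toy baseline (not the AWS forecasting service itself), the sketch below produces a seasonal-naive forecast, the kind of starting point a real forecasting pipeline would compare its models against; the data and season length are hypothetical:

```python
def seasonal_naive_forecast(series, season_length, horizon):
    """Forecast each future point with the observed value exactly one
    season earlier -- the standard naive baseline for seasonal data."""
    return [series[-season_length + (i % season_length)]
            for i in range(horizon)]

# Hypothetical daily sales with a 3-day seasonal pattern.
sales = [10, 12, 14, 11, 13, 15, 12, 14, 16]
print(seasonal_naive_forecast(sales, 3, 3))  # repeats the last season
```

Any learned model (ARIMA, DeepAR-style networks, etc.) should beat this baseline to justify its extra complexity.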
Big Data for Startups: how to create Big Data applications in Server... mode – Amazon Web Services
The variety and quantity of data created every day keeps accelerating and represents a unique opportunity to innovate and create new startups.
Yet managing large amounts of data can seem complex: building large-scale Big Data clusters looks like an investment affordable only to established companies. But the elasticity of the cloud and, in particular, serverless services let us break through these limits.
We will therefore see how to develop Big Data applications quickly, without worrying about the infrastructure, devoting all our resources to developing our ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session we will present the main features of the service and show how to deploy your application in a few steps.
Twenty years ago Amazon went through a radical transformation aimed at increasing the pace of innovation. Over this period we learned how changing our approach to application development allowed us to greatly increase agility and release velocity and, ultimately, to build more reliable and scalable applications. In this session we will explain how we define modern applications and how building modern apps affects not only the application architecture but also the organisational structure, the development release pipelines, and even the operating model. We will also describe common approaches to modernisation, including the approach used by Amazon.com itself.
How to spend up to 90% less with containers and Spot Instances – Amazon Web Services
The use of containers keeps growing.
When properly designed, container-based applications are very often stateless and flexible.
AWS ECS, EKS, and Kubernetes on EC2 can all take advantage of Spot Instances, yielding average savings of 70% compared with On-Demand Instances. In this session we will look at the characteristics of Spot Instances and how they can easily be used on AWS. We will also learn how Spreaker uses Spot Instances to run applications of various kinds, in production, at a fraction of the on-demand cost!
In recent months, many customers have been asking us how to monetise Open APIs, simplify Fintech integrations, and accelerate the adoption of various Open Banking business models. AWS and FinConecta would therefore like to invite you to the Open Finance marketplace presentation on October 20th.
Event Agenda:
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a strategic approach
Q&A
Make your startup's market offering unique with Machine Lea... services – Amazon Web Services
To create value and build a differentiated, recognisable offering, successful startups know how to combine established technologies with innovative, purpose-built components.
AWS provides ready-to-use services and, at the same time, lets you customise and create the differentiating elements of your own offering.
Focusing on Machine Learning technologies, we will see how to select the artificial intelligence services offered by AWS and, with the help of a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: automate the management and deployment of... – Amazon Web Services
With the traditional approach to IT, implementing DevOps techniques has long been difficult: until now they have often involved manual activities, occasionally leading to application downtime that interrupted users' work. With the advent of the cloud, DevOps techniques are now within everyone's reach, at low cost, for any kind of workload, guaranteeing greater system reliability and bringing significant improvements to business continuity.
AWS provides AWS OpsWorks as a Configuration Management tool that aims to automate and simplify the management and deployment of EC2 instances by means of Chef and Puppet.
Discover how to leverage AWS OpsWorks to guarantee the reliability of your application running on EC2 instances.
Microsoft Active Directory on AWS to support your Windows workloads – Amazon Web Services
Want to know the options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support group policy management, authentication, and authorisation. In this session we will discuss the options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and running Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis based on artificial intelligence techniques is evolving and maturing at a rapid pace. In this webinar we will explore the possibilities offered by AWS services for applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are organising a free virtual event next Wednesday 14 October from 12:00 to 13:00 dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in cloud environments based on VMware vSphere® and access a broad range of AWS services, fully exploiting the potential of the AWS cloud while protecting existing VMware investments.
Build your first serverless ledger-based app with QLDB and NodeJS – Amazon Web Services
Many companies today build applications with ledger-style functionality, for example to verify the history of credits and debits in banking transactions, or to track the supply-chain flow of their products.
Underpinning these solutions are ledger databases, which provide a transparent, immutable, and cryptographically verifiable transaction log, but which are complex and costly tools to manage.
Amazon QLDB removes the need to build complex custom systems by providing a fully managed, serverless ledger database.
In this session we will see how to build a complete serverless application that uses QLDB's capabilities.
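As a generic illustration of the "cryptographically verifiable log" idea, and not of QLDB's actual internals, the sketch below chains each entry's hash to the previous one so that any later tampering is detectable:

```python
import hashlib
import json

def append_entry(chain, record):
    """Append a record whose hash covers the previous entry's hash,
    making later tampering detectable -- a toy version of the
    verifiable log a ledger database like QLDB maintains."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"record": record, "prev": prev_hash},
                         sort_keys=True)
    chain.append({
        "record": record,
        "prev": prev_hash,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    })
    return chain

def verify(chain):
    """Recompute every hash in order; return False if any entry or
    link was altered."""
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps({"record": entry["record"], "prev": prev_hash},
                             sort_keys=True)
        if (entry["prev"] != prev_hash or
                hashlib.sha256(payload.encode()).hexdigest() != entry["hash"]):
            return False
        prev_hash = entry["hash"]
    return True

ledger = []
append_entry(ledger, {"account": "A-1", "debit": 100})
append_entry(ledger, {"account": "A-1", "credit": 40})
print(verify(ledger))                    # True
ledger[0]["record"]["debit"] = 999       # tamper with history...
print(verify(ledger))                    # False
```

QLDB implements this idea at scale with a Merkle-tree journal digest rather than a simple linear chain.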
With the rise of microservice architectures and rich mobile and web applications, APIs are more important than ever for delivering a great experience to end users. In this session we will learn how to tackle modern API design challenges with GraphQL, an open-source API query language used by Facebook, Amazon, and others, and how to use AWS AppSync, a managed serverless GraphQL service on AWS. We will dig into several scenarios, seeing how AppSync can help address these use cases by building modern APIs with real-time and offline data-update capabilities.
We will also learn how Sky Italia uses AWS AppSync to deliver real-time sports updates to users of its web portal.
Oracle databases and VMware Cloud™ on AWS: myths to debunk – Amazon Web Services
Many organisations reap the benefits of the cloud by migrating their Oracle workloads, securing significant gains in agility and cost efficiency.
Migrating these workloads can create complexity during application modernisation and refactoring, and performance risks can be introduced when moving applications out of on-premises data centres.
In these slides, AWS and VMware experts present simple, practical tips to ease and simplify the migration of Oracle workloads while accelerating the transformation to the cloud; they dive into the architecture and show how to fully exploit the potential of VMware Cloud™ on AWS.
Amazon Elastic Container Service (Amazon ECS) is a highly scalable container management service that simplifies the management of Docker containers through an orchestration layer controlling deployment and the related lifecycle. In this session we will present the main features of the service, reference architectures for different workloads, and the few simple steps needed to quickly migrate one or more of your containers.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... – Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality – Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... – Ramesh Iyer
In today's fast-changing business world, companies must adapt and embrace new ideas or risk falling behind the competition. However, fostering a culture of innovation takes real work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at every stage.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 – Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... – UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Connector Corner: Automate dynamic content and events by pushing a button – DianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Accelerate your Kubernetes clusters with Varnish Caching – Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Essentials of Automations: Optimizing FME Workflows with Parameters – Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
GraphRAG is All You Need? LLM & Knowledge Graph – Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
State of ICS and IoT Cyber Threat Landscape Report 2024 preview – Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on countries – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Key Trends Shaping the Future of Infrastructure.pdf – Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud, and open source: exploring how these areas are likely to mature and develop over the short and long term, and considering how organisations can position themselves to adapt and thrive.
Epistemic Interaction - tuning interfaces to provide information for AI support – Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
4. Innovation for Amazon S3
- Cross-region replication
- Amazon CloudWatch metrics for Amazon S3
- AWS CloudTrail support
- VPC endpoint for Amazon S3
- Amazon S3 bucket limit increase
- Event notifications
- Read-after-write consistency in all regions
6. Choice of storage classes on Amazon S3
Active data: Standard
Infrequently accessed data: Standard - Infrequent Access
Archive data: Amazon Glacier
7. Some use cases have different requirements
- File sync and share + consumer file storage
- Backup and archive + disaster recovery
- Long retained data
8. Standard-Infrequent Access storage
Durable: 11 9s of durability
Available: designed for 99.9% availability
High performance: same throughput as Amazon S3 Standard storage
Secure:
• Server-side encryption
• Use your encryption keys
• KMS-managed encryption keys
Integrated:
• Lifecycle management
• Versioning
• Event notifications
• Metrics
Easy to use:
• No impact on user experience
• Simple REST API
• Single bucket
23. Amazon S3 as your persistent data store
Separate compute and storage
Resize and shut down Amazon EMR clusters with no data loss
Point multiple Amazon EMR clusters at the same data in Amazon S3
24. EMRFS makes it easier to use Amazon S3
Read-after-write consistency
Very fast list operations
Error handling options
Support for Amazon S3 encryption
Transparent to applications: s3://
31. Lifecycle policies
Automatic tiering and cost controls
Includes two possible actions:
Transition: archives to Standard-IA or Amazon Glacier after a specified time
Expiration: deletes objects after a specified time
Allows for actions to be combined
Set policies at the prefix level
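The two actions above can be combined in one rule. A minimal sketch, assuming boto3, of the dictionary shape `put_bucket_lifecycle_configuration` expects — the rule ID, `logs/` prefix, day counts, and bucket name are illustrative, not values from the deck:

```python
# Lifecycle rule combining the Transition and Expiration actions,
# scoped to a prefix as the slide describes. Values are assumptions.
lifecycle_config = {
    "Rules": [
        {
            "ID": "tier-then-expire",
            "Filter": {"Prefix": "logs/"},  # policies apply at the prefix level
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},  # Transition action
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},                      # Expiration action
        }
    ]
}

# With AWS credentials configured, the policy would be applied like this:
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-bucket", LifecycleConfiguration=lifecycle_config)
```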
32. Standard-Infrequent Access storage
Transition Standard to Standard-IA
Transition Standard-IA to Amazon Glacier storage
Expiration lifecycle policy
Versioning support
Directly PUT to Standard-IA
Integrated: lifecycle management
35. Versioning S3 buckets
Protects from accidental overwrites and deletes
New version with every upload
Easy retrieval of deleted objects and rollback
Three states of an Amazon S3 bucket:
Default – Unversioned
Versioning-enabled
Versioning-suspended
Best Practice
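The three states above map to the payloads boto3's `put_bucket_versioning` accepts — a hedged sketch, with the bucket name illustrative. Note that a new bucket reports no `Status` at all (unversioned), and once versioning is enabled it can only be suspended, never removed:

```python
# VersioningConfiguration payloads for the two explicit states; the
# default (unversioned) state has no payload because it cannot be set back.
VERSIONING_ENABLED = {"Status": "Enabled"}
VERSIONING_SUSPENDED = {"Status": "Suspended"}

# With credentials configured:
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_versioning(Bucket="my-bucket",
#                          VersioningConfiguration=VERSIONING_ENABLED)
```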
37. Expired object delete marker policy
Deleting a versioned object makes a delete marker the current version of the object
No storage charge for delete markers
Removing delete markers can improve list performance
Use a lifecycle policy to automatically remove the current-version delete marker when previous versions of the object no longer exist
38. Example lifecycle policy to remove current versions
<LifecycleConfiguration>
<Rule>
...
<Expiration>
<Days>60</Days>
</Expiration>
<NoncurrentVersionExpiration>
<NoncurrentDays>30</NoncurrentDays>
</NoncurrentVersionExpiration>
</Rule>
</LifecycleConfiguration>
Leverage lifecycle to expire current and non-current versions
S3 Lifecycle will automatically remove any expired object delete markers
39. Example lifecycle policy for non-current version expiration
A lifecycle configuration with the NoncurrentVersionExpiration action removes all noncurrent versions.
<LifecycleConfiguration>
<Rule>
...
<Expiration>
<ExpiredObjectDeleteMarker>true</ExpiredObjectDeleteMarker>
</Expiration>
<NoncurrentVersionExpiration>
<NoncurrentDays>30</NoncurrentDays>
</NoncurrentVersionExpiration>
</Rule>
</LifecycleConfiguration>
By setting the ExpiredObjectDeleteMarker element to true in the Expiration action, you direct Amazon S3 to remove expired object delete markers.
41. Tip: Restricting deletes
Bucket policies can restrict deletes
For additional security, enable MFA (multi-factor authentication) delete, which requires additional authentication to:
- Change the versioning state of your bucket
- Permanently delete an object version
MFA delete requires both your security credentials and a code from an approved authentication device
Best Practice
43. Parallelizing PUTs with multipart uploads
Increase aggregate throughput by parallelizing PUTs on high-bandwidth networks
Move the bottleneck to the network, where it belongs
Increase resiliency to network errors; fewer large restarts on error-prone networks
Best Practice
44. Multipart upload provides parallelism
• Allows faster, more flexible uploads
• Allows you to upload a single object as a set of parts
• Upon completion, Amazon S3 presents all the parts as a single object
• Enables parallel uploads, pausing and resuming an object upload, and starting uploads before you know the total object size
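The splitting step behind multipart upload can be sketched in a few lines: the client carves the object into fixed-size parts (every part but the last is `part_size` bytes) and can then send each part independently and in parallel. The 8 MB default here is an assumption; the SDKs choose their own thresholds:

```python
# Compute the byte ranges a multipart upload would send as individual parts.
def part_ranges(object_size, part_size=8 * 1024 * 1024):
    """Return (part_number, first_byte, last_byte) tuples covering the object."""
    return [
        (i + 1, start, min(start + part_size, object_size) - 1)
        for i, start in enumerate(range(0, object_size, part_size))
    ]
```

For a 20 MB object with 8 MB parts this yields three ranges; each could be sent with UploadPart and stitched together with CompleteMultipartUpload.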
45. Incomplete multipart upload expiration policy
The multipart upload feature improves PUT performance
A partial upload does not appear in the bucket list
A partial upload does incur storage charges
Set a lifecycle policy to automatically expire incomplete multipart uploads after a predefined number of days
46. Example lifecycle policy
Abort incomplete multipart uploads seven days after initiation:
<LifecycleConfiguration>
<Rule>
<ID>sample-rule</ID>
<Prefix>SomeKeyPrefix/</Prefix>
<Status>Enabled</Status>
<AbortIncompleteMultipartUpload>
<DaysAfterInitiation>7</DaysAfterInitiation>
</AbortIncompleteMultipartUpload>
</Rule>
</LifecycleConfiguration>
47. Parallelize your GETs
Use range-based GETs to get multithreaded performance when downloading objects
Compensates for unreliable networks
Benefits of multithreaded parallelism
Best Practice
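The range-based GET technique above can be sketched as follows: fetch byte ranges of one object on several threads, then reassemble them in order. `fetch_range` is a placeholder the caller supplies; with boto3 it would wrap `get_object(Bucket=..., Key=..., Range=f"bytes={start}-{end}")`:

```python
# Parallel ranged download: split the object into inclusive byte spans,
# fetch them concurrently, and join the chunks in their original order.
from concurrent.futures import ThreadPoolExecutor

def ranged_download(object_size, fetch_range, chunk=8 * 1024 * 1024, workers=8):
    """Download [start, end] chunks in parallel and join them in order."""
    spans = [(start, min(start + chunk, object_size) - 1)
             for start in range(0, object_size, chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # pool.map preserves input order, so reassembly is a simple join.
        parts = pool.map(lambda span: fetch_range(*span), spans)
    return b"".join(parts)
```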
48. Parallelizing LIST
Parallelize LIST when you need a sequential list of your keys
Use a secondary index as a faster alternative to LIST:
- Sorting by metadata
- Search ability
- Objects by timestamp
Best Practice
49. SSL best practices to optimize performance
Use the SDKs!
EC2 instance types: AES-NI hardware acceleration (cat /proc/cpuinfo)
Threads can work against you (finite network capacity)
Timeouts
Connection pooling
Use keep-alives to avoid repeated handshakes
Best Practice
51. Distributing key names
Add randomness to the beginning of the key name…
<my_bucket>/521335461-2013_11_13.jpg
<my_bucket>/465330151-2013_11_13.jpg
<my_bucket>/987331160-2013_11_13.jpg
<my_bucket>/465765461-2013_11_13.jpg
<my_bucket>/125631151-2013_11_13.jpg
<my_bucket>/934563160-2013_11_13.jpg
<my_bucket>/532132341-2013_11_13.jpg
<my_bucket>/565437681-2013_11_13.jpg
<my_bucket>/234567460-2013_11_13.jpg
<my_bucket>/456767561-2013_11_13.jpg
<my_bucket>/345565651-2013_11_13.jpg
<my_bucket>/431345660-2013_11_13.jpg
52. Other techniques for distributing key names
Store objects under a hash of their name and add the original name as metadata:
“deadmau5_mix.mp3” becomes 0aa316fb000eae52921aab1b4697424958a53ad9
Prepend the key name with a short hash:
0aa3-deadmau5_mix.mp3
Or reverse a sequential prefix:
5321354831-deadmau5_mix.mp3
Best Practice
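The hash-prefix trick above takes only a few lines of Python: derive a short, uniformly distributed prefix from the object's name so keys spread across the keyspace, while the original name stays recoverable from the suffix. The function name and 4-character prefix length are illustrative choices:

```python
# Prepend a short hex digest of the object name, matching the
# "0aa3-deadmau5_mix.mp3" pattern on the slide.
import hashlib

def hashed_key(name, prefix_len=4):
    """Prepend the first prefix_len hex chars of the name's SHA-1 digest."""
    digest = hashlib.sha1(name.encode("utf-8")).hexdigest()
    return f"{digest[:prefix_len]}-{name}"
```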
53. S3 Standard-Infrequent Access
Using big data on S3 for analysis
S3 management policies
Versioning for S3
Best practices and performance optimization for S3
Recap