This document discusses content negotiation in HTTP, which allows a client and server to negotiate the optimal representation of a resource based on factors like the client's capabilities and preferences. It describes how content negotiation works, using request headers such as Accept and Accept-Encoding to indicate the desired response format and Content-Type to describe the request body. Content negotiation enables versioning of APIs by media type rather than URL, keeping the resource identity clean.
GraphQL is widely adopted. As it becomes more popular, there are security considerations for hosting GraphQL services. In this talk, I cover a set of good practices and ideas that can be used to protect this exciting technology.
Kubernetes Security with Calico and Open Policy Agent (CloudOps2005)
Ray Kao and Kevin Harris from Microsoft presenting ‘Kubernetes Security with Calico and Open Policy Agent’ at the spring 2019 Kubernetes and Cloud Native meetup in Toronto.
We are very glad to see that the Apache Pulsar community has successfully released 2.4.0 after a few months of hard work. It is a great milestone for this fast-growing project and the whole Pulsar community. Sijie shares a selection of the most interesting and important features the community added in this new release.
An overview of securing Hadoop. Content primarily by Balaji Ganesan, one of the leaders of the Apache Argus project. Presented on Sept 4, 2014 at the Toronto Hadoop User Group by Adam Muise.
Minerva is a storage plugin for Drill that connects IPFS's decentralized storage to Drill's flexible query engine. Any data file stored on IPFS can be accessed from Drill's query interface as easily as a file stored on a local disk.
Visit https://github.com/bdchain/Minerva to learn more and try it out!
Both Apache Pulsar and Apache Flink share a similar view on how the data and the computation level of an application can be “streaming-first”, with batch as a special case of streaming. With Apache Pulsar's segmented-stream storage and Apache Flink's steps toward unifying batch and stream processing workloads under one framework, there are numerous ways of integrating the two technologies to provide elastic data processing at massive scale and build a real streaming warehouse.
In this talk, Sijie Guo from the Apache Pulsar community will give an overview of Apache Pulsar and how it provides the unified data view needed to fully leverage Apache Flink's unified computation runtime for elastic data processing. He will share the latest integrations between Apache Pulsar and Apache Flink, especially around effectively-once processing and schema integration.
It introduces and illustrates use cases, benefits, and problems for Kerberos deployments on Hadoop, and shows how Token support and TokenPreauth can help solve those problems. It also briefly introduces the Haox project, a Java client library for Kerberos.
Elastic Data Processing with Apache Flink and Apache Pulsar (StreamNative)
More and more applications are using Flink for low-latency data processing. Flink unifies batch and stream processing in one computation engine. In reality, however, truly unifying batch and stream processing requires a data system that offers a unified representation for both batch and streaming data. Nowadays, streaming data is typically stored in a log storage or messaging system, while batch data is stored in distributed filesystems and object stores. That means data scientists still need to write two different computing jobs to access the same data stored in different data systems.
Apache Pulsar is a next-generation messaging and streaming data system. It was originally built at Yahoo, and has graduated from the Apache Incubator to become a Top-Level Project. Pulsar separates message serving and data storage into two layers. This layered architecture provides high throughput and low latency while ensuring high availability and scalability. Pulsar's segment-centric storage design, along with the layered architecture, makes Pulsar a perfect unbounded streaming data system that fits well into Flink's computation model.
In this talk, Sijie Guo from the Apache Pulsar PMC will introduce Pulsar, its layered architecture, and its segment-centric storage, detailing how this architecture integrates with Flink to provide elastic, unified batch and stream processing.
Dynamic Authorization & Policy Control for Docker Environments (Torin Sandall)
How do you enable rapid deployment of innovative applications on top of Docker containers while still satisfying strict requirements from your InfoSec and compliance departments? The Open Policy Agent (OPA), an open-source tool, enables you to update and enforce policies without slowing down developers or modifying application code. In this talk, Justin Cormack (Security Engineer at Docker) and Torin Sandall (Co-founder of the OPA project) will show how you can leverage the integrations between Docker and OPA to enforce fine-grained policies in your organization's container platform while still allowing your developers to move quickly. This talk is targeted at engineers building and operating container platforms who are interested in security and policy enforcement. The audience can expect to take away fresh ideas about how to enforce fine-grained security policies across their container platform.
Measuring CDN performance and why you're doing it wrong (Fastly)
Integrating content delivery networks into your application infrastructure can offer many benefits, including major performance improvements for your applications. So understanding how CDNs perform — especially for your specific use cases — is vital. However, testing for measurement is complicated and nuanced, and results in metric overload and confusion. It's becoming increasingly important to understand measurement techniques, what they're telling you, and how to apply them to your actual content.
In this session, we'll examine the challenges around measuring CDN performance and focus on the different methods for measurement. We'll discuss what to measure, important metrics to focus on, and different ways that numbers may mislead you.
More specifically, we'll cover:
Different techniques for measuring CDN performance
Differentiating between network footprint and object delivery performance
Choosing the right content to test
Core metrics to focus on and how each impacts real traffic
Understanding cache hit ratio, why it can be misleading, and how to measure for it
Building a Messaging Solution for OVHcloud with Apache Pulsar (Pierre Zemb, StreamNative)
OVHcloud is the biggest European cloud provider. From dedicated servers to Managed Kubernetes, from VMware® based Hosted Private Cloud to OpenStack-based Public Cloud, we have over 1.4 million customers worldwide.
Internally, we have been running Apache Kafka for years, and despite all the skills gained operating multiple clusters with millions of messages per second, we decided to shift and build the foundation of our 'topic-as-a-service' product, called ioStream, on Apache Pulsar.
In this talk, you will get insight into why we decided to use Apache Pulsar instead of Apache Kafka as the core of ioStream. We will share our journey with Apache Pulsar, from deployment to management, including what worked and what did not.
Hadoop and Kerberos: the Madness Beyond the Gate: January 2016 edition (Steve Loughran)
An update of the "Hadoop and Kerberos: the Madness Beyond the Gate" talk, covering recent work on the "Fix Kerberos" JIRA and its first deliverable: KDiag.
In this day and age, maintaining privacy throughout our electronic communications is absolutely necessary. Creating user accounts, and not exposing your MongoDB environment to the wider internet, are basic concepts that have been missed in the past. Once that has been addressed, individuals and organizations interested in becoming PCI compliant must turn to securing their data through encryption. With MongoDB, we have two options: encryption at rest and transport encryption.
Securing Hadoop's REST APIs with Apache Knox Gateway, Hadoop Summit June 6th, ... (Kevin Minder)
Securing Hadoop's REST APIs with Apache Knox Gateway
Presented at Hadoop Summit on June 6th, 2014
Describes the overall roles the Apache Knox Gateway plays in Hadoop security and briefly covers its primary features.
The MongoDB Spark Connector integrates MongoDB and Apache Spark, providing users with the ability to process data in MongoDB with the massive parallelism of Spark. The connector gives users access to Spark's streaming capabilities, machine learning libraries, and interactive processing through the Spark shell, Dataframes and Datasets. We'll take a tour of the connector with a focus on practical use of the connector, and run a demo using both Spark and MongoDB for data processing.
Practical Elasticsearch - real world use cases (Itamar)
Elasticsearch - a search and real-time analytics server based on Apache Lucene - is gaining a lot of popularity lately and is being used worldwide to power many sophisticated systems. While many use it for the "standard" stuff (that is, simple full-text search and real-time log analysis), there are some really interesting usage patterns that can prove useful in many real-world scenarios. In this talk we will briefly cover Elasticsearch and its common use cases, and then showcase some less common use cases that leverage Elasticsearch in interesting and oftentimes innovative ways.
Redundancy and high availability are the basis for all production deployments. Database systems with large data sets or high-throughput applications can challenge the capacity of a single server, such as CPU for high query rates or RAM for large working sets. Adding more CPU and RAM for vertical scaling has limits, so systems need horizontal scaling, which distributes data across multiple servers. MongoDB supports horizontal scaling through sharding.
Redundancy and high availability are the basis for all production deployments. With MongoDB this can be achieved by deploying a replica set. In this talk, we'll explore how MongoDB replication works and what the components of a replica set are. Using examples of wrong deployment configurations, we will highlight how to properly run replica sets in production, whether deployed on-premises or in the cloud.
Learn how to build new classes of sophisticated, real-time analytics by combining Apache Spark, the industry's leading data processing engine, with MongoDB, the industry’s fastest growing database.
We live in a world of “big data.” But it isn’t just the data itself that is valuable – it’s the insight it can generate. How quickly an organization can unlock and act on that insight has become a major source of competitive advantage. Collecting data in operational systems and then relying on nightly batch extract, transform, load (ETL) processes to update the enterprise data warehouse (EDW) is no longer sufficient.
In this live session, we show you how MongoDB and Spark work together and provide examples using the new Spark Connector for MongoDB.
This session was sponsored by Stratio & Paradigma.
With the popularization of GitHub, the lack of security controls in commits has exposed a great deal of sensitive data that can compromise both companies and ordinary users. To make the search easier, I've created a script to automate the collection of results. The script is now at version 2.0, with new features and improvements in the code; I will demonstrate how to perform the collection and the impact it can have.
Exploring the replication and sharding in MongoDB (Igor Donchovski)
Redundancy and high availability are the basis for all production deployments. Database systems with large data sets or high-throughput applications can challenge the capacity of a single server, such as CPU for high query rates or RAM for large working sets. Adding more CPU and RAM for vertical scaling has limits, so systems need horizontal scaling, which distributes data across multiple servers. MongoDB supports horizontal scaling through sharding. Each shard consists of a replica set that provides redundancy and high availability.
Dev Jumpstart: Build Your First App with MongoDB (MongoDB)
New to MongoDB? This talk will introduce the philosophy and features of MongoDB. We’ll discuss the benefits of the document-based data model that MongoDB offers by walking through how one can build a simple app to store books. We’ll cover inserting, updating, and querying the database of books. This session will jumpstart your knowledge of MongoDB development, providing you with context for the rest of the day's content.
This tutorial introduces basic PHP programming. In this topic you'll learn how to write PHP code and how to develop your first PHP application (Khmer Date).
In this talk we will look at best practices for building an API, such as:
-The structure of the JSON to send
-Meta tags
-Error handling
-Exchanging headers to contextualize the request
-Internationalization
Beautiful REST and JSON APIs - Les Hazlewood (jaxconf)
Designing a really clean and intuitive REST + JSON API is no small feat. You have to worry about resources, collections of resources, pagination, query parameters, references to other resources, which HTTP Methods to use, HTTP Caching, security, and more! And you have to make sure it lasts and doesn't break clients as you add features over time. Further, while there are many references on creating REST APIs with XML, there are many fewer references for REST + JSON.
Presented at GlobusWorld 2022 by Rachana Ananthakrishnan from Globus. Describes the Globus platform and how developers can access Globus service APIs in their applications.
Representational State Transfer, or REST, has become the hip, new buzzword of Web 2.0. But what really makes an application RESTful? Is it pretty URLs? Or the use of XML over HTTP? Is it any web service that doesn't use SOAP? In all of the hype, the definition of REST has become clouded and diluted.
It's time to take a fresh look at REST. In this talk, Ben Ramsey reintroduces REST and its architectural style. He shows that REST is not only an architecture for web services but that it describes an architecture for the Web. Ramsey will demonstrate how statelessness, a resource-oriented architecture, atomicity of requests, and other traits of REST make the most of the Web's architecture to provide scalable and simpler web services, turning the Web into a platform by which rich clients can access and manipulate data.
Azure DocumentDB for Healthcare Integration - Part 2 (BizTalk360)
This is the second of a three-part series. The following is the agenda for Part 2:
Review of DocumentDB REST API
Understanding the overall problem
High-level Design
How Swagger fits in
Design and development
Next steps
REST has become the hip, new buzzword of Web 2.0. But what makes an application RESTful? Pretty URLs? XML over HTTP? Any service that's not SOAP? In all the hype, the definition of REST has become clouded and diluted.
Forget what you thought you knew about REST. In this talk, Ben Ramsey reintroduces REST, placing it under a microscope, uncovering each constraint that forms REST's crucial principles. Ramsey explains how REST is a style for network-based software applications, emphasizing scalability and efficiency through separation of concerns and taking advantage of the Web as a platform for rich Internet applications.
Providing Globus Services to Users of JASMIN for Environmental Data Analysis (Globus)
JASMIN is the UK’s high-performance data analysis platform for environmental science, operated by STFC on behalf of the UK Natural Environment Research Council (NERC). In addition to its role in hosting the CEDA Archive (NERC’s long-term repository for climate, atmospheric science & Earth observation data in the UK), JASMIN provides a collaborative platform to a community of around 2,000 scientists in the UK and beyond, providing nearly 400 environmental science projects with working space, compute resources and tools to facilitate their work. High-performance data transfer into and out of JASMIN has always been a key feature, with many scientists bringing model outputs from supercomputers elsewhere in the UK, to analyse against observational or other model data in the CEDA Archive. A growing number of JASMIN users are now realising the benefits of using the Globus service to provide reliable and efficient data movement and other tasks in this and other contexts. Further use cases involve long-distance (intercontinental) transfers to and from JASMIN, and collecting results from a mobile atmospheric radar system, pushing data to JASMIN via a lightweight Globus deployment. We provide details of how Globus fits into our current infrastructure, our experience of the recent migration to GCSv5.4, and of our interest in developing use of the wider ecosystem of Globus services for the benefit of our user community.
In software engineering, the right architecture is essential for robust, scalable platforms. Wix has undergone a pivotal shift from event sourcing to a CRUD-based model for its microservices. This talk will chart the course of this pivotal journey.
Event sourcing, which records state changes as immutable events, provided robust auditing and "time travel" debugging for Wix Stores' microservices. Despite its benefits, the complexity it introduced in state management slowed development. Wix responded by adopting a simpler, unified CRUD model. This talk will explore the challenges of event sourcing and the advantages of Wix's new "CRUD on steroids" approach, which streamlines API integration and domain event management while preserving data integrity and system resilience.
Participants will gain valuable insights into Wix's strategies for ensuring atomicity in database updates and event production, as well as caching, materialization, and performance optimization techniques within a distributed system.
Join us to discover how Wix has mastered the art of balancing simplicity and extensibility, and learn how the re-adoption of the modest CRUD has turbocharged their development velocity, resilience, and scalability in a high-growth environment.
Check out the webinar slides to learn more about how XfilesPro transforms Salesforce document management by leveraging its world-class applications. For more details, please connect with sales@xfilespro.com
If you want to watch the on-demand webinar, please click here: https://www.xfilespro.com/webinars/salesforce-document-management-2-0-smarter-faster-better/
TROUBLESHOOTING 9 TYPES OF OUTOFMEMORYERROR (Tier1 app)
Even though at the surface level ‘java.lang.OutOfMemoryError’ appears to be one single error, underneath there are 9 types of OutOfMemoryError. Each type of OutOfMemoryError has different causes, diagnosis approaches, and solutions. This session equips you with the knowledge, tools, and techniques needed to troubleshoot and conquer OutOfMemoryError in all its forms, ensuring smoother, more efficient Java applications.
Exploring Innovations in Data Repository Solutions - Insights from the U.S. G... (Globus)
The U.S. Geological Survey (USGS) has made substantial investments in meeting evolving scientific, technical, and policy driven demands on storing, managing, and delivering data. As these demands continue to grow in complexity and scale, the USGS must continue to explore innovative solutions to improve its management, curation, sharing, delivering, and preservation approaches for large-scale research data. Supporting these needs, the USGS has partnered with the University of Chicago-Globus to research and develop advanced repository components and workflows leveraging its current investment in Globus. The primary outcome of this partnership includes the development of a prototype enterprise repository, driven by USGS Data Release requirements, through exploration and implementation of the entire suite of the Globus platform offerings, including Globus Flow, Globus Auth, Globus Transfer, and Globus Search. This presentation will provide insights into this research partnership, introduce the unique requirements and challenges being addressed and provide relevant project progress.
First Steps with Globus Compute Multi-User Endpoints (Globus)
In this presentation we will share our experiences around getting started with the Globus Compute multi-user endpoint. Working with the Pharmacology group at the University of Auckland, we have previously written an application using Globus Compute that can offload computationally expensive steps in the researcher's workflows, which they wish to manage from their familiar Windows environments, onto the NeSI (New Zealand eScience Infrastructure) cluster. Some of the challenges we have encountered were that each researcher had to set up and manage their own single-user globus compute endpoint and that the workloads had varying resource requirements (CPUs, memory and wall time) between different runs. We hope that the multi-user endpoint will help to address these challenges and share an update on our progress here.
An Enterprise Resource Planning (ERP) system includes various modules that reduce any business's workload. Additionally, it organizes workflows, which drives enhanced productivity. Here is a detailed explanation of the ERP modules. Going through the points will help you understand how the software is changing work dynamics.
To know more details here: https://blogs.nyggs.com/nyggs/enterprise-resource-planning-erp-system-modules/
Listen to the keynote address and hear about the latest developments from Rachana Ananthakrishnan and Ian Foster who review the updates to the Globus Platform and Service, and the relevance of Globus to the scientific community as an automation platform to accelerate scientific discovery.
Top Features to Include in Your Winzo Clone App for Business Growth (rickgrimesss22)
Discover the essential features to incorporate in your Winzo clone app to boost business growth, enhance user engagement, and drive revenue. Learn how to create a compelling gaming experience that stands out in the competitive market.
Navigating the Metaverse: A Journey into Virtual Evolution (Donna Lenk)
Join us for an exploration of the Metaverse's evolution, where innovation meets imagination. Discover new dimensions of virtual events, engage with thought-provoking discussions, and witness the transformative power of digital realms.
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc and I ... (Juraj Vysvader)
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc. I didn't get rich from it, but my work did reach 63K downloads (powering possibly tens of thousands of websites).
Quarkus Hidden and Forbidden Extensions (Max Andersen)
Quarkus has a vast extension ecosystem and is known for its supersonic, subatomic feature set. Some of these features are not as well known, and some extensions are less talked about, but that does not make them less interesting - quite the opposite.
Come join this talk to see some tips and tricks for using Quarkus and some of the lesser known features, extensions and development techniques.
Paketo Buildpacks: the best way to build OCI images? DevopsDa... (Anthony Dahanne)
Buildpacks have existed for more than 10 years! At first, they were used to detect and build an application before deploying it to certain PaaS platforms. Then, with their latest generation, the Cloud Native Buildpacks (incubating at the CNCF), we were able to create Docker (OCI) images. Are they a good alternative to the Dockerfile? What are the Paketo buildpacks? Which communities support them, and how?
Come find out in this ignite session.
Globus Connect Server Deep Dive - GlobusWorld 2024 (Globus)
We explore the Globus Connect Server (GCS) architecture and experiment with advanced configuration options and use cases. This content is targeted at system administrators who are familiar with GCS and currently operate—or are planning to operate—broader deployments at their institution.
How to Position Your Globus Data Portal for Success: Ten Good Practices (Globus)
Science gateways allow science and engineering communities to access shared data, software, computing services, and instruments. Science gateways have gained a lot of traction in the last twenty years, as evidenced by projects such as the Science Gateways Community Institute (SGCI) and the Center of Excellence on Science Gateways (SGX3) in the US, The Australian Research Data Commons (ARDC) and its platforms in Australia, and the projects around Virtual Research Environments in Europe. A few mature frameworks have evolved with their different strengths and foci and have been taken up by a larger community such as the Globus Data Portal, Hubzero, Tapis, and Galaxy. However, even when gateways are built on successful frameworks, they continue to face the challenges of ongoing maintenance costs and how to meet the ever-expanding needs of the community they serve with enhanced features. It is not uncommon that gateways with compelling use cases are nonetheless unable to get past the prototype phase and become a full production service, or if they do, they don't survive more than a couple of years. While there is no guaranteed pathway to success, it seems likely that for any gateway there is a need for a strong community and/or solid funding streams to create and sustain its success. With over twenty years of examples to draw from, this presentation goes into detail for ten factors common to successful and enduring gateways that effectively serve as best practices for any new or developing gateway.
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus... (Globus)
Large Language Models (LLMs) are currently the center of attention in the tech world, particularly for their potential to advance research. In this presentation, we'll explore a straightforward and effective method for quickly initiating inference runs on supercomputers using the vLLM tool with Globus Compute, specifically on the Polaris system at ALCF. We'll begin by briefly discussing the popularity and applications of LLMs in various fields. Following this, we will introduce the vLLM tool, and explain how it integrates with Globus Compute to efficiently manage LLM operations on Polaris. Attendees will learn the practical aspects of setting up and remotely triggering LLMs from local machines, focusing on ease of use and efficiency. This talk is ideal for researchers and practitioners looking to leverage the power of LLMs in their work, offering a clear guide to harnessing supercomputing resources for quick and effective LLM inference.
A Comprehensive Look at Generative AI in Retail App Testing (kalichargn70th171)
Traditional software testing methods are being challenged in retail, where customer expectations and technological advancements continually shape the landscape. Enter generative AI—a transformative subset of artificial intelligence technologies poised to revolutionize software testing.
Software Engineering, Software Consulting, Tech Lead. Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Security, Spring Transaction, Spring MVC, Log4j, REST/SOAP web services.
OpenFOAM solver for Helmholtz equation, helmholtzFoam / helmholtzBubbleFoam (takuyayamamoto1800)
In these slides, we show a simulation example and how to compile the solver. With this solver, the Helmholtz equation can be solved by helmholtzFoam; the Helmholtz equation with uniformly dispersed bubbles can be simulated by helmholtzBubbleFoam.
8. Principles Of Content Negotiation
https://developer.mozilla.org/en-US/docs/Web/HTTP/Content_negotiation
9. Server-Driven Content Negotiation
• Client requests a resource
• Client adds HTTP headers along with the URL
• Accept
• Accept-Charset
• Accept-Encoding
• Accept-Language
• Server uses these headers to choose a representation
• Server returns a 406 Not Acceptable status code if it can't find a suitable representation
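
To make that flow concrete, here is a minimal Python sketch of the server side (the function name and handler shape are illustrative, not from the slides); it scans the Accept header for the first media type the server can produce, and signals 406 otherwise, ignoring q-factors for now:

# Minimal sketch of server-driven negotiation.
def negotiate(accept_header, available):
    # Strip any parameters (q-factors are covered on a later slide).
    accepted = [part.split(";")[0].strip() for part in accept_header.split(",")]
    for media_type in accepted:
        if media_type in available:
            return media_type
        if media_type == "*/*":
            return next(iter(available))  # any representation will do
    return None  # caller should answer 406 Not Acceptable

chosen = negotiate("text/html, application/json", {"application/json"})
print(chosen or "406 Not Acceptable")  # -> application/json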
10. Server-Driven Content Negotiation
https://developer.mozilla.org/en-US/docs/Web/HTTP/Content_negotiation
11. Accept Header
The Accept request header field can be used to specify the media types that are acceptable for the response.

GET /data/miran
Accept: image/jpeg
12. Accept Header
The Accept header can also carry a quality factor (q), a parameter indicating the relative degree of preference between the different media types.

GET /data/miran
Accept: image/jpeg,
        image/png; q=0.8,
        image/*; q=0.5,
        application/json; q=0.1
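
As a rough illustration of how a server might rank these preferences, the following Python sketch parses the header into (media type, q) pairs, defaulting q to 1.0 when omitted; the same parsing would apply to Accept-Charset, Accept-Encoding, and Accept-Language:

# Sketch: rank the media types in an Accept header by quality factor.
def parse_accept(header):
    results = []
    for item in header.split(","):
        parts = [p.strip() for p in item.split(";")]
        media_type, q = parts[0], 1.0  # q defaults to 1.0 when omitted
        for param in parts[1:]:
            if param.startswith("q="):
                q = float(param[2:])
        results.append((media_type, q))
    return sorted(results, key=lambda pair: pair[1], reverse=True)

print(parse_accept("image/jpeg, image/png; q=0.8, image/*; q=0.5, application/json; q=0.1"))
# [('image/jpeg', 1.0), ('image/png', 0.8), ('image/*', 0.5), ('application/json', 0.1)]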
13. Accept Header
Some top-level media types eligible for the Accept header:
● application
● audio
● font
● image
● text
● video
14. Accept-Charset Header
The Accept-Charset header indicates to the server which character encodings are understood by the user agent.

Accept-Charset: ISO-8859-1,utf-8;q=0.8,*;q=0.7
15. Accept-Encoding Header
The Accept-Encoding header defines the acceptable content encodings. The value is a q-factor list that indicates the priority of the encoding values.

Accept-Encoding: deflate, gzip;q=1.0, *;q=0.5
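
A server honouring this header could look like the Python sketch below (standard-library gzip only; the helper name is invented, and q-factors are ignored for brevity):

import gzip

# Sketch: compress the response body only when the client accepts gzip.
def encode_body(body, accept_encoding):
    encodings = [e.split(";")[0].strip() for e in accept_encoding.split(",")]
    if "gzip" in encodings or "*" in encodings:
        return gzip.compress(body), "gzip"  # also emit Content-Encoding: gzip
    return body, "identity"

body, encoding = encode_body(b'{"hello": "world"}' * 50,
                             "deflate, gzip;q=1.0, *;q=0.5")
print(encoding, len(body))  # gzip, far fewer bytes than the 950-byte raw body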
16. Accept-Language Header
The Accept-Language header is similar to Accept, but restricts the set of natural languages that are preferred as a response to the request.

GET /home
Accept-Language: no,
                 en-us;q=0.8,
                 en;q=0.7
17. Other Headers for Content Negotiation
• The Accept-CH header
• The Accept-CH-Lifetime header
• The User-Agent header
• The Vary response header
18. Content-Type Negotiation
• The server introspects the Content-Type header
• The server returns a 415 Unsupported Media Type status code if the content type is not supported

POST /index HTTP/1.1
Accept: application/json
Content-Type: application/json

{
  "foo": "bar"
}
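
A hedged sketch of that server-side check in Python, rejecting any request body that is not JSON (the handler shape and names are illustrative):

import json

SUPPORTED = {"application/json"}

# Sketch: enforce Content-Type on a POST handler.
def handle_post(content_type, raw_body):
    media_type = content_type.split(";")[0].strip().lower()  # drop charset params
    if media_type not in SUPPORTED:
        return 415, "Unsupported Media Type"
    return 200, json.loads(raw_body)

print(handle_post("text/xml", "<foo/>"))                  # (415, 'Unsupported Media Type')
print(handle_post("application/json", '{"foo": "bar"}'))  # (200, {'foo': 'bar'})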
19. RESTful Versioning Using Content Negotiation
Putting the version in the URL couples the identity of the resource (the URL) to its representation. That was never the intent of REST. Instead, we should use content negotiation to version representations.
20. RESTful Versioning Using Content Negotiation

Version 1:
{
  "name": {
    "first": "Chris",
    "last": "Morris"
  }
}

Version 2:
{
  "name": {
    "full": "Chris Morris"
  }
}
21. RESTful Versioning Using Content Negotiation

Request:
GET /users/42 HTTP/1.1
Accept: application/vnd.linn.user+json; version=1

Response:
HTTP/1.1 200 OK
Content-Type: application/vnd.linn.user+json; version=1

{
  "name": {
    "first": "Chris",
    "last": "Morris"
  }
}
22. RESTful Versioning Using Content Negotiation

Request:
GET /users/42 HTTP/1.1
Accept: application/vnd.linn.user+json; version=2

Response:
HTTP/1.1 200 OK
Content-Type: application/vnd.linn.user+json; version=2

{
  "name": {
    "full": "Chris Morris"
  }
}
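
Putting the two versions together, a server can dispatch on the version parameter of the vendor media type. The Python sketch below is one plausible shape for this, reusing the application/vnd.linn.user+json type from the slides; the function name and the default-version policy are assumptions:

# Sketch: choose a representation version from the Accept header's
# version parameter, defaulting to version 1 when none is given.
def render_user(accept_header, user):
    params = dict(p.strip().split("=", 1)
                  for p in accept_header.split(";")[1:] if "=" in p)
    version = params.get("version", "1")
    if version == "2":
        body = {"name": {"full": user["first"] + " " + user["last"]}}
    else:
        body = {"name": {"first": user["first"], "last": user["last"]}}
    return body, "application/vnd.linn.user+json; version=" + version

user = {"first": "Chris", "last": "Morris"}
print(render_user("application/vnd.linn.user+json; version=2", user))
# ({'name': {'full': 'Chris Morris'}}, 'application/vnd.linn.user+json; version=2')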
25. Disadvantages
• The server doesn't have complete knowledge of the browser.
• Shared caches are less efficient.
• The information sent by the client is quite verbose.
26. Conclusion
If we want to build a true REST API, we should seriously consider using content negotiation. It lets us effectively decouple versioning from the identity of each resource and keep a clean URL that points only to the resource.