Counterintuitive observations in optimizing search engine algorithms. Don't be fooled if your user satisfaction goes up - it might not be all good...
Topics covered include:
1. How to test end user satisfaction and dissatisfaction.
2. How to optimize your search engine algorithms. Intuitive and counterintuitive lessons from optimizing. What works best: a general-purpose search engine, or one specialized on focus areas? Optimizing search for each individual user, or for users grouped in cohorts? How much do individual wrong results matter in a list of otherwise good query results? What is the impact of usage scenarios, target devices & environment? ...
3. Resulting strategies
A LOCATION-BASED RECOMMENDER SYSTEM FRAMEWORK TO IMPROVE ACCURACY IN USERBASE... (ijcsa)
Recommender systems are used to predict and recommend relevant items to system users. Items can take many forms, such as documents, locations, movies, and articles. A recommender system works by examining users’ behaviors, item ratings, various logs (e.g., a user’s history log), and social connections. The main objective of this examination is to predict which items a user is likely to enjoy. Although traditional recommender systems have been very successful at predicting what a user might like, they do not take contextual information such as the user’s location into account. In this paper, we propose a new framework aimed at enhancing the accuracy of recommendations in user-based collaborative filtering by taking users’ locations into consideration.
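The abstract above does not spell out how location enters user-based collaborative filtering, but one common approach is to discount each neighbor's rating-based similarity by a proximity weight. The sketch below illustrates that idea; the exponential decay, the `scale` parameter, and all names and data are my own assumptions, not details from the paper.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse rating dicts {item: rating}."""
    common = set(u) & set(v)
    num = sum(u[i] * v[i] for i in common)
    den = (math.sqrt(sum(x * x for x in u.values()))
           * math.sqrt(sum(x * x for x in v.values())))
    return num / den if den else 0.0

def location_weight(loc_a, loc_b, scale=100.0):
    """Hypothetical proximity weight: decays exponentially with distance."""
    return math.exp(-math.dist(loc_a, loc_b) / scale)

def predict(user, item, ratings, locations):
    """Predict `user`'s rating for `item` from location-weighted neighbors."""
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or item not in r:
            continue
        w = cosine(ratings[user], r) * location_weight(locations[user], locations[other])
        num += w * r[item]
        den += abs(w)
    return num / den if den else None

ratings = {
    "alice": {"doc1": 5, "doc2": 3},
    "bob":   {"doc1": 4, "doc2": 3, "doc3": 4},
    "carol": {"doc1": 1, "doc3": 2},
}
locations = {"alice": (0, 0), "bob": (5, 5), "carol": (400, 400)}
# Nearby bob dominates the prediction; far-away carol barely contributes.
print(round(predict("alice", "doc3", ratings, locations), 2))
```

With these toy numbers the prediction lands close to bob's rating of 4, because carol's influence is suppressed by her distance.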
A location based movie recommender system (ijfcstjournal)
Available recommender systems mostly provide recommendations based on users’ preferences, using traditional methods such as collaborative filtering, which relies only on similarities between users and items. However, collaborative filtering can produce poor recommendations because it ignores other useful available data, such as users’ locations, and hence the accuracy of the recommendations can be low. This is especially apparent in systems where location strongly affects users’ preferences, such as movie recommender systems. In this paper, a new location-based movie recommender system built on collaborative filtering is introduced to enhance the accuracy and quality of recommendations. In this approach, users’ locations are utilized and taken into consideration throughout the recommendation and peer-selection process. The potential of the proposed approach to provide novel, better-quality recommendations is demonstrated through experiments on real datasets.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
What Is UX Research & How Is It Done.pptx (TurboAnchor)
What Is UX Research & How Is It Done? (https://turboanchor.com/what-is-ux-research/)
To develop products that can meet users’ needs (and captivate them as well), you first have to determine who your user is and what their needs are. That’s where user experience (UX) research comes in. UX researchers study target users thoroughly to collect and interpret data that helps inform the product design process.
For more in-depth knowledge, let’s take a closer look at what UX research is.
UX research is learning about target users and product needs and wants, then using those insights to enhance the design process. UX researchers follow different methods to discover problems & design opportunities. It’s all about finding insights to direct successful design. As discussed above, when you conduct UX research you can provide the best solutions because you know what users need. It can be applied at any stage of the design process.
Web search engines help users find useful information on the WWW. However, when the same query is submitted by different users, typical search engines return the same results regardless of who submitted the query. Generally, each user has different information needs behind his/her query. Therefore, search results should be adapted to users with different information needs. There is thus a need for approaches that adapt search results to each user’s need for relevant information without any user effort. Such search systems, which adapt to each user’s preferences, can be achieved by constructing user profiles based on modified collaborative filtering with detailed analysis of the user’s browsing history.
There are three possible types of web search systems that can provide personalized information: (1) systems using relevance feedback, (2) systems in which users register their interests, and (3) systems that recommend information based on the user’s history. In the first technique, users have to provide feedback through relevance judgments, which is time consuming; the second requires users to register their static interests, which needs extra effort from the user. The third technique is therefore preferable: users don’t have to give explicit ratings, relevance is tracked automatically from user behavior with search results and history of data usage, no registration of interests is required, and changing user interests are captured dynamically. The results section shows that the user’s browsing history allows each user to perform a more fine-grained search by capturing changes in each user’s preferences without any user effort. Users need less time to find the relevant snippet in personalized search results compared to the original results.
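The third technique described above, re-ranking results from a profile mined out of browsing history, can be sketched in a few lines. This is a minimal illustration, not the paper's method: the term-frequency profile, the linear blend, and the `alpha` weight are all assumptions I introduce for clarity.

```python
from collections import Counter

def build_profile(browsing_history):
    """Term-frequency user profile from titles/snippets of visited pages."""
    profile = Counter()
    for page_text in browsing_history:
        profile.update(page_text.lower().split())
    return profile

def rerank(results, profile, alpha=0.7):
    """Blend the engine's original score with profile overlap.
    `results` is a list of (title, base_score); alpha weights the base score."""
    def personalized(item):
        title, base = item
        overlap = sum(profile[t] for t in title.lower().split())
        total = sum(profile.values()) or 1
        return alpha * base + (1 - alpha) * (overlap / total)
    return sorted(results, key=personalized, reverse=True)

# A user whose history is Python-heavy gets the Python result promoted
# above a slightly higher-scored generic result, with no explicit feedback.
history = ["python pandas tutorial", "python web scraping guide"]
profile = build_profile(history)
results = [("Java streams guide", 0.9), ("Python pandas cookbook", 0.88)]
print(rerank(results, profile)[0][0])  # → Python pandas cookbook
```

Because the profile is rebuilt from recent history, it drifts with the user's changing interests, which is exactly the property the abstract highlights.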
IJRET : International Journal of Research in Engineering and Technology - Improv... (eSAT Publishing House)
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Auditing search engines for differential satisfaction across demographics (Amit Sharma)
Many online services, such as search engines, social media platforms, and digital marketplaces, are advertised as being available to any user, regardless of their age, gender, or other demographic factors. However, there are growing concerns that these services may systematically underserve some groups of users. In this work, we present a framework for internally auditing such services for differences in user satisfaction across demographic groups, using search engines as a case study. We first explain the pitfalls of naively comparing the behavioral metrics that are commonly used to evaluate search engines. We then propose three methods for measuring latent differences in user satisfaction from observed differences in evaluation metrics. To develop these methods, we drew on ideas from the causal inference and multilevel modeling literature. Our framework is broadly applicable to other online services, and provides general insight into interpreting their evaluation metrics.
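One pitfall the abstract warns about is that demographic groups issue different query mixes, which confounds a naive group-level metric comparison. The toy sketch below contrasts a raw satisfaction-rate difference with a query-type-stratified one; the data, the equal-weight stratification, and all names are my own illustration, not the paper's actual estimators (which draw on causal inference and multilevel modeling).

```python
from collections import defaultdict

# Toy logs: (group, query_type, satisfied). Group A issues mostly easy
# navigational queries; group B mostly harder informational ones.
logs = [
    ("A", "nav", 1), ("A", "nav", 1), ("A", "nav", 1), ("A", "info", 0),
    ("B", "nav", 1), ("B", "info", 0), ("B", "info", 0), ("B", "info", 1),
]

def naive_diff(logs):
    """Raw satisfaction-rate difference between groups A and B."""
    rates = {}
    for g in ("A", "B"):
        rows = [s for grp, _, s in logs if grp == g]
        rates[g] = sum(rows) / len(rows)
    return rates["A"] - rates["B"]

def stratified_diff(logs):
    """Average the per-query-type differences, weighting strata equally."""
    by_stratum = defaultdict(lambda: {"A": [], "B": []})
    for g, q, s in logs:
        by_stratum[q][g].append(s)
    diffs = []
    for groups in by_stratum.values():
        if groups["A"] and groups["B"]:
            diffs.append(sum(groups["A"]) / len(groups["A"])
                         - sum(groups["B"]) / len(groups["B"]))
    return sum(diffs) / len(diffs)

print(round(naive_diff(logs), 3), round(stratified_diff(logs), 3))
```

Here the naive comparison suggests group A is better served, while the stratified view shows that, query type for query type, group B does at least as well: the raw gap was driven by query mix, not by the engine underserving anyone.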
User search goal inference and feedback session using fast generalized – fuzz... (eSAT Publishing House)
PMSE captures users’ preferences in the form of concepts by mining their clickthrough data.
Classification of location information:
-Content concepts
-Location concepts
Users’ locations (positioned by GPS) are also used.
User preferences are organized in an ontology-based, multi-facet user profile.
The client collects and stores clickthrough data locally to protect privacy.
The server performs concept extraction, training, and reranking.
Privacy is addressed by restricting the information kept in the user profile.
We prototype PMSE on the Google Android platform.
Results show that PMSE significantly improves precision compared to the baseline.
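The slides above separate content concepts from location concepts in the user profile and rerank results on the server. A minimal sketch of that split-profile scoring is given below; the linear blend and the `wc`/`wl` weights are assumptions for illustration, not values from PMSE.

```python
def concept_score(doc_concepts, profile, weight):
    """Sum of profile weights for concepts appearing in the document."""
    return weight * sum(profile.get(c, 0.0) for c in doc_concepts)

def rerank(docs, content_profile, location_profile, wc=0.6, wl=0.4):
    """Rank documents by combined content-concept and location-concept
    score. wc/wl are assumed blend weights, not taken from the paper."""
    def score(doc):
        return (concept_score(doc["content"], content_profile, wc)
                + concept_score(doc["location"], location_profile, wl))
    return sorted(docs, key=score, reverse=True)

# Hypothetical profiles mined from clickthrough data: this user prefers
# hotel content and Paris-related locations.
content_profile = {"hotel": 0.9, "museum": 0.2}
location_profile = {"paris": 0.8, "tokyo": 0.1}
docs = [
    {"id": 1, "content": ["museum"], "location": ["tokyo"]},
    {"id": 2, "content": ["hotel"], "location": ["paris"]},
]
print(rerank(docs, content_profile, location_profile)[0]["id"])  # → 2
```

Keeping the two concept facets separate is what lets the profile stay informative when only one facet (say, location) changes as the user moves around.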
Personalized E-commerce based recommendation systems using deep-learning tech... (IAESIJAI)
Technology advances daily, and personalization trends are increasingly tied to the explicit behavior of users on the internet. Recommendation systems use predictive mechanisms, such as predicting the rating a customer could give to a specific item, to build a ranked list of items according to each user’s preferences and exhibit personalized recommendations. Existing recommendation techniques face many challenges, including accuracy, scalability, and data sparsity. Recently, deep learning has attracted significant research attention for improving feature specification and the efficiency of retrieving the necessary information in recommendation systems. Here, we provide a thorough review of deep-learning mechanisms focused on learning-rate-based prediction approaches, and articulate a broad summary of state-of-the-art techniques. These novel techniques incorporate innovative perspectives relevant to the unique and exciting growth of this field.
A New Algorithm for Inferring User Search Goals with Feedback Sessions (IJERA Editor)
Different users may have different search goals when they submit the same query to a search engine. The inference and analysis of user search goals can be very useful in improving search engine relevance and user experience. We present a novel approach to infer user search goals by analyzing search engine query logs. Once the user has entered a query, the resulting URLs are filtered and pseudo-documents are generated; the server then applies a clustering mechanism to the URLs, so that they are listed in different categories. First, feedback sessions are constructed from user click-through logs and can efficiently reflect the information needs of users. Second, we propose a novel approach that generates pseudo-documents to better represent the feedback sessions for clustering. Finally, we propose a new criterion, “Classified Average Precision (CAP)”, to evaluate the performance of inferring user search goals. Experimental results are presented using user click-through logs from a commercial search engine to validate the effectiveness of the proposed methods. The distributions of user search goals can also be useful in applications such as re-ranking web search results that cover different user search goals.
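The pipeline above builds pseudo-documents from feedback sessions and clusters them to surface distinct search goals. A minimal stdlib-only sketch of that idea follows; the term-count representation, greedy single-pass clustering, and the similarity threshold are simplifications I introduce, not the paper's algorithm or its CAP evaluation.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two term-count Counters."""
    common = set(a) & set(b)
    num = sum(a[t] * b[t] for t in common)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def pseudo_doc(session):
    """Pseudo-document: term counts over titles/snippets of clicked URLs."""
    return Counter(t for text in session for t in text.lower().split())

def cluster(sessions, threshold=0.5):
    """Greedy clustering: attach each session to the first cluster whose
    seed document is similar enough, otherwise start a new cluster."""
    clusters = []
    for d in (pseudo_doc(s) for s in sessions):
        for c in clusters:
            if cosine(d, c[0]) >= threshold:
                c.append(d)
                break
        else:
            clusters.append([d])
    return clusters

# Two sessions about the iPhone and one about the fruit: the query
# "apple" hides two distinct search goals, recovered as two clusters.
sessions = [
    ["apple iphone price", "apple iphone review"],
    ["iphone review apple store"],
    ["apple fruit nutrition", "apple fruit calories"],
]
print(len(cluster(sessions)))  # → 2
```

Each resulting cluster stands for one inferred search goal, and cluster sizes approximate the goal distribution used for re-ranking.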
As the Internet grows, users of search providers continually demand search results that are accurate to their wishes. Personalized search is one of the options available to users to shape the results returned to them, based on personal data provided to the search provider. However, this raises privacy concerns, as users are typically anxious about revealing personal information to an often faceless service provider on the Internet. This work addresses the privacy issues surrounding personalized search and discusses ways that privacy can be improved, so that users can become more comfortable with the release of their personal information in order to obtain more precise search results.
Software Engineering, Software Consulting, Tech Lead.
Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Security,
Spring Transaction, Spring MVC,
Log4j, REST/SOAP WEB-SERVICES.
Globus Compute with IRI Workflows - GlobusWorld 2024 (Globus)
As part of the DOE Integrated Research Infrastructure (IRI) program, NERSC at Lawrence Berkeley National Lab and ALCF at Argonne National Lab are working closely with General Atomics on accelerating the computing requirements of the DIII-D experiment. As part of this work, the team is investigating ways to speed up the time to solution for many different parts of the DIII-D workflow, including how jobs are run on HPC systems. One of these routes is looking at Globus Compute as a way to replace the current method for managing tasks, and we describe a brief proof of concept showing how Globus Compute could help to schedule jobs and serve as a tool to connect compute at different facilities.
Understanding Globus Data Transfers with NetSageGlobus
NetSage is an open privacy-aware network measurement, analysis, and visualization service designed to help end-users visualize and reason about large data transfers. NetSage traditionally has used a combination of passive measurements, including SNMP and flow data, as well as active measurements, mainly perfSONAR, to provide longitudinal network performance data visualization. It has been deployed by dozens of networks world wide, and is supported domestically by the Engagement and Performance Operations Center (EPOC), NSF #2328479. We have recently expanded the NetSage data sources to include logs for Globus data transfers, following the same privacy-preserving approach as for Flow data. Using the logs for the Texas Advanced Computing Center (TACC) as an example, this talk will walk through several different example use cases that NetSage can answer, including: Who is using Globus to share data with my institution, and what kind of performance are they able to achieve? How many transfers has Globus supported for us? Which sites are we sharing the most data with, and how is that changing over time? How is my site using Globus to move data internally, and what kind of performance do we see for those transfers? What percentage of data transfers at my institution used Globus, and how did the overall data transfer performance compare to the Globus users?
top nidhi software solution freedownloadvrstrong314
This presentation emphasizes the importance of data security and legal compliance for Nidhi companies in India. It highlights how online Nidhi software solutions, like Vector Nidhi Software, offer advanced features tailored to these needs. Key aspects include encryption, access controls, and audit trails to ensure data security. The software complies with regulatory guidelines from the MCA and RBI and adheres to Nidhi Rules, 2014. With customizable, user-friendly interfaces and real-time features, these Nidhi software solutions enhance efficiency, support growth, and provide exceptional member services. The presentation concludes with contact information for further inquiries.
Climate Science Flows: Enabling Petabyte-Scale Climate Analysis with the Eart...Globus
The Earth System Grid Federation (ESGF) is a global network of data servers that archives and distributes the planet’s largest collection of Earth system model output for thousands of climate and environmental scientists worldwide. Many of these petabyte-scale data archives are located in proximity to large high-performance computing (HPC) or cloud computing resources, but the primary workflow for data users consists of transferring data, and applying computations on a different system. As a part of the ESGF 2.0 US project (funded by the United States Department of Energy Office of Science), we developed pre-defined data workflows, which can be run on-demand, capable of applying many data reduction and data analysis to the large ESGF data archives, transferring only the resultant analysis (ex. visualizations, smaller data files). In this talk, we will showcase a few of these workflows, highlighting how Globus Flows can be used for petabyte-scale climate analysis.
Quarkus Hidden and Forbidden ExtensionsMax Andersen
Quarkus has a vast extension ecosystem and is known for its subsonic and subatomic feature set. Some of these features are not as well known, and some extensions are less talked about, but that does not make them less interesting - quite the opposite.
Come join this talk to see some tips and tricks for using Quarkus and some of the lesser known features, extensions and development techniques.
How to Position Your Globus Data Portal for Success Ten Good PracticesGlobus
Science gateways allow science and engineering communities to access shared data, software, computing services, and instruments. Science gateways have gained a lot of traction in the last twenty years, as evidenced by projects such as the Science Gateways Community Institute (SGCI) and the Center of Excellence on Science Gateways (SGX3) in the US, The Australian Research Data Commons (ARDC) and its platforms in Australia, and the projects around Virtual Research Environments in Europe. A few mature frameworks have evolved with their different strengths and foci and have been taken up by a larger community such as the Globus Data Portal, Hubzero, Tapis, and Galaxy. However, even when gateways are built on successful frameworks, they continue to face the challenges of ongoing maintenance costs and how to meet the ever-expanding needs of the community they serve with enhanced features. It is not uncommon that gateways with compelling use cases are nonetheless unable to get past the prototype phase and become a full production service, or if they do, they don't survive more than a couple of years. While there is no guaranteed pathway to success, it seems likely that for any gateway there is a need for a strong community and/or solid funding streams to create and sustain its success. With over twenty years of examples to draw from, this presentation goes into detail for ten factors common to successful and enduring gateways that effectively serve as best practices for any new or developing gateway.
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus...Globus
Large Language Models (LLMs) are currently the center of attention in the tech world, particularly for their potential to advance research. In this presentation, we'll explore a straightforward and effective method for quickly initiating inference runs on supercomputers using the vLLM tool with Globus Compute, specifically on the Polaris system at ALCF. We'll begin by briefly discussing the popularity and applications of LLMs in various fields. Following this, we will introduce the vLLM tool, and explain how it integrates with Globus Compute to efficiently manage LLM operations on Polaris. Attendees will learn the practical aspects of setting up and remotely triggering LLMs from local machines, focusing on ease of use and efficiency. This talk is ideal for researchers and practitioners looking to leverage the power of LLMs in their work, offering a clear guide to harnessing supercomputing resources for quick and effective LLM inference.
Globus Connect Server Deep Dive - GlobusWorld 2024Globus
We explore the Globus Connect Server (GCS) architecture and experiment with advanced configuration options and use cases. This content is targeted at system administrators who are familiar with GCS and currently operate—or are planning to operate—broader deployments at their institution.
May Marketo Masterclass, London MUG May 22 2024.pdfAdele Miller
Can't make Adobe Summit in Vegas? No sweat because the EMEA Marketo Engage Champions are coming to London to share their Summit sessions, insights and more!
This is a MUG with a twist you don't want to miss.
In software engineering, the right architecture is essential for robust, scalable platforms. Wix has undergone a pivotal shift from event sourcing to a CRUD-based model for its microservices. This talk will chart the course of this pivotal journey.
Event sourcing, which records state changes as immutable events, provided robust auditing and "time travel" debugging for Wix Stores' microservices. Despite its benefits, the complexity it introduced in state management slowed development. Wix responded by adopting a simpler, unified CRUD model. This talk will explore the challenges of event sourcing and the advantages of Wix's new "CRUD on steroids" approach, which streamlines API integration and domain event management while preserving data integrity and system resilience.
Participants will gain valuable insights into Wix's strategies for ensuring atomicity in database updates and event production, as well as caching, materialization, and performance optimization techniques within a distributed system.
Join us to discover how Wix has mastered the art of balancing simplicity and extensibility, and learn how the re-adoption of the modest CRUD has turbocharged their development velocity, resilience, and scalability in a high-growth environment.
Code reviews are vital for ensuring good code quality. They serve as one of our last lines of defense against bugs and subpar code reaching production.
Yet, they often turn into annoying tasks riddled with frustration, hostility, unclear feedback and lack of standards. How can we improve this crucial process?
In this session we will cover:
- The Art of Effective Code Reviews
- Streamlining the Review Process
- Elevating Reviews with Automated Tools
By the end of this presentation, you'll have the knowledge on how to organize and improve your code review proces
Large Language Models and the End of ProgrammingMatt Welsh
Talk by Matt Welsh at Craft Conference 2024 on the impact that Large Language Models will have on the future of software development. In this talk, I discuss the ways in which LLMs will impact the software industry, from replacing human software developers with AI, to replacing conventional software with models that perform reasoning, computation, and problem-solving.
We describe the deployment and use of Globus Compute for remote computation. This content is aimed at researchers who wish to compute on remote resources using a unified programming interface, as well as system administrators who will deploy and operate Globus Compute services on their research computing infrastructure.
Prosigns: Transforming Business with Tailored Technology SolutionsProsigns
Unlocking Business Potential: Tailored Technology Solutions by Prosigns
Discover how Prosigns, a leading technology solutions provider, partners with businesses to drive innovation and success. Our presentation showcases our comprehensive range of services, including custom software development, web and mobile app development, AI & ML solutions, blockchain integration, DevOps services, and Microsoft Dynamics 365 support.
Custom Software Development: Prosigns specializes in creating bespoke software solutions that cater to your unique business needs. Our team of experts works closely with you to understand your requirements and deliver tailor-made software that enhances efficiency and drives growth.
Web and Mobile App Development: From responsive websites to intuitive mobile applications, Prosigns develops cutting-edge solutions that engage users and deliver seamless experiences across devices.
AI & ML Solutions: Harnessing the power of Artificial Intelligence and Machine Learning, Prosigns provides smart solutions that automate processes, provide valuable insights, and drive informed decision-making.
Blockchain Integration: Prosigns offers comprehensive blockchain solutions, including development, integration, and consulting services, enabling businesses to leverage blockchain technology for enhanced security, transparency, and efficiency.
DevOps Services: Prosigns' DevOps services streamline development and operations processes, ensuring faster and more reliable software delivery through automation and continuous integration.
Microsoft Dynamics 365 Support: Prosigns provides comprehensive support and maintenance services for Microsoft Dynamics 365, ensuring your system is always up-to-date, secure, and running smoothly.
Learn how our collaborative approach and dedication to excellence help businesses achieve their goals and stay ahead in today's digital landscape. From concept to deployment, Prosigns is your trusted partner for transforming ideas into reality and unlocking the full potential of your business.
Join us on a journey of innovation and growth. Let's partner for success with Prosigns.
Enhancing Project Management Efficiency_ Leveraging AI Tools like ChatGPT.pdfJay Das
With the advent of artificial intelligence or AI tools, project management processes are undergoing a transformative shift. By using tools like ChatGPT, and Bard organizations can empower their leaders and managers to plan, execute, and monitor projects more effectively.
1. Developing and Testing Search
Engine Algorithms –
Counterintuitive observations due to end
user behavior. Suggestions.
(This presentation is not representing views of my current employer)
CHRISTIAN VON REVENTLOW (VONREVENTLOW)
VONREVENTLOW@YAHOO.COM, +1 201 259 5973
2. Use a data-driven feedback loop to
evolve and test search engine algorithms
[Diagram: feedback loop – testers or end users submit queries through the search end-user interface; the search engine algorithm returns query results, which users rate; software computes search-result-quality statistics (search quality metrics) that feed back to the data scientists and algorithm/software developers who evolve the algorithm.]
3. Measuring search performance
• Understanding the behavior and needs of satisfied and unsatisfied search users is key to improving the
users' search experience [0]
• Satisfaction/dissatisfaction data is used to evolve and optimize the search algorithms [1].
• Traces from end users, or a subset thereof
• Testers creating sample queries.
• Metrics like MAP (mean average precision) or NDCG (normalized discounted cumulative gain) have
been used to measure search quality [2]. Click-throughs were used to judge the relevance of results.
• Nowadays metrics use the entire sequence of events in a search. An example is modelling search
logs as Markov models to get estimators of user satisfaction or dissatisfaction [1,3].
[0] A. Dasdan, K. Tsioutsiouliklis, E. Velipasaoglu. "Web Search Engine Metrics", WWW 2010
[1] A. Hassan, Y. Song, Li-Wei He. "A Task Level Metric for Improving Web Search Satisfaction and its Application on Improving Relevance Estimation", ACM CIKM'11, Oct 2011
[2] K. Jaervelin, J. Kekalainen. "Cumulated gain based evaluation of IR techniques", ACM TOIS 2002
[3] A. Hassan, R. White. "Personalized Models of Search Satisfaction", CIKM, Nov 2013
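As a quick illustration of the NDCG metric mentioned above, here is a minimal sketch following the cumulated-gain formulation of [2]; the graded relevance judgments in the example are made up for illustration.

```python
import math

def dcg(relevances):
    # Discounted cumulative gain: each result's relevance grade is
    # discounted by the log of its 1-indexed rank.
    return sum(rel / math.log2(rank + 1)
               for rank, rel in enumerate(relevances, start=1))

def ndcg(relevances):
    # Normalize by the DCG of the ideal (descending-relevance) ordering,
    # so a perfect ranking scores 1.0.
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Hypothetical relevance grades (0-3) for one query's ranked results.
ranking = [3, 2, 3, 0, 1]
print(round(ndcg(ranking), 3))  # prints 0.972
```

Note that NDCG only scores the ranked list itself; the session-level Markov-model estimators of [1,3] additionally use the sequence of user actions around that list.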
4. Specialization improves end user
satisfaction in search
• Economic theory: specialized search engines deliver an
advantage – specifically when it's not only about attracting
as many searchers as possible but about satisfying as many of them [1].
• Real data shows that user satisfaction increases when grouping
users into cohorts with similar topical interests and optimizing
for each relevant cohort [2].
• Best results come from combining search results that are valid for
everybody (global) with those valid for the specific cohort only (personal).
• The counterintuitive: User satisfaction is better when
optimizing search for the cohort – vs. optimizing for each
individual.
Users profit from the larger feedback dataset that a cohort of
similar people provides to the search algorithms.
[1] D. Kempe, B. Lucier. "User Satisfaction in Competitive Sponsored Search", Cornell University, arXiv:1310.4098v1 [cs.GT]
[2] A. Hassan, R. White. "Personalized Models of Search Satisfaction", CIKM, Nov 2013
[Chart: example percentage gain in accuracy vs. optimizing for a single target audience (y-axis 0–6%); bars for "Optimize by Topic", "Optimize by larger Cohort", and "Optimize by smaller Cohort"; series: "Global & Personal combined" and "Personal only".]
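One way the "global & personal combined" idea could be sketched is as a convex blend of a globally trained relevance score with a cohort-specific one. The function names, weights, and scores below are illustrative assumptions, not taken from the slides.

```python
def blended_score(global_score: float, cohort_score: float,
                  cohort_weight: float = 0.4) -> float:
    # Convex combination: keeps the blended score in the same
    # range as its inputs; cohort_weight tunes the specialization.
    return (1 - cohort_weight) * global_score + cohort_weight * cohort_score

def rank(results, cohort_weight=0.4):
    # results: list of (doc_id, global_score, cohort_score) tuples,
    # returned in descending order of the blended score.
    return sorted(results,
                  key=lambda r: blended_score(r[1], r[2], cohort_weight),
                  reverse=True)

candidates = [("doc_a", 0.9, 0.2), ("doc_b", 0.6, 0.9), ("doc_c", 0.5, 0.5)]
print([doc for doc, *_ in rank(candidates)])  # prints ['doc_b', 'doc_a', 'doc_c']
```

With the cohort signal weighted in, doc_b overtakes doc_a despite its lower global score – the kind of cohort-level lift the chart above reports.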
5. It's not sufficient that "the right result" is
part of the results list
• Studies have shown that users are only interested in
the first few results – thus high accuracy is desirable
[1]
• Temporal and geographic relevance, coverage,
comprehensiveness, rapid discovery of new content,
content freshness and diversity are dimensions relevant
for users [2]
• The user's search environment has a major impact – like
search on a mobile device vs. search from a tablet –
requiring specialization.
• The counterintuitive: Even if the "right result" is part
of the first few results, having irrelevant/perceived-wrong
results makes the user disbelieve the
correctness of ALL results.
[Figure: example query results – Result 1: relevant; Result 2: right result; Result 3: wrong/irrelevant; Result 4: relevant; Result 5: relevant.]
[1] R.W. White, D. Morris. "Investigating the querying and browsing behavior of advanced search engine users", Proc. SIGIR
[2] A. Dasdan, K. Tsioutsiouliklis, E. Velipasaoglu. "Web Search Engine Metrics", WWW 2010
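Because one bad result among good ones can undermine trust in the whole list, it can be worth flagging every query whose top-k contains a judged-irrelevant result, rather than only averaging relevance. A minimal sketch, with made-up labels:

```python
def has_irrelevant_in_top_k(labels, k=5):
    # labels: per-rank relevance judgments for one query's results,
    # e.g. "relevant" or "irrelevant", ordered by rank.
    return any(label == "irrelevant" for label in labels[:k])

queries = {
    "q1": ["relevant", "relevant", "irrelevant", "relevant", "relevant"],
    "q2": ["relevant", "relevant", "relevant", "relevant", "relevant"],
}
# Queries to prioritize when minimizing dissatisfying results.
flagged = [q for q, labels in queries.items() if has_irrelevant_in_top_k(labels)]
print(flagged)  # prints ['q1']
```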
6. Don't get fooled if your user satisfaction
goes up – it might not be all good…
• Behavioral differences have been shown between novice
and expert searchers [1]
• Optimizing differently for experts and casual users increases
user satisfaction [2].
• The counterintuitive: User satisfaction goes up over time
even if you do not modify the algorithms.
• Why:
• Users learn how to query best (i.e. become mature users)
• They learn what not to ask – i.e. intuitively restrict the usage space
• Or worse: they defect to other search engines. 60% of switches to a
different engine are caused by dissatisfaction [3]
• So don't get fooled – understand why your satisfaction went
up…
[1] R.W. White, D. Morris. "Investigating the querying and browsing behavior of advanced search engine users", Proc. SIGIR
[2] A. Hassan, R. White. "Personalized Models of Search Satisfaction", CIKM, Nov 2013
[3] Q. Guo, R.W. White, Y. Zhang, B. Anderson, S. Dumais. "Why searchers switch: understanding and predicting engine switching rationales", Proc. SIGIR 2011
[Chart: example percentage gain in accuracy vs. optimizing for a single target audience (y-axis 0–6%); bars for "Optimize by Topic", "Optimize for expert vs. casual user", and "Optimize by larger Cohort"; series: "Global & Personal combined" and "Personal only".]
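One way to check whether a satisfaction gain comes from the algorithm or from users learning to query better is to split the metric by user tenure: if only long-tenured users improved, suspect learning effects. A sketch under that assumption; the tenure buckets and session data are illustrative.

```python
from collections import defaultdict

def satisfaction_by_tenure(sessions):
    # sessions: iterable of (tenure_bucket, satisfied: bool) pairs,
    # e.g. one entry per search session.
    totals = defaultdict(lambda: [0, 0])  # bucket -> [satisfied, count]
    for bucket, satisfied in sessions:
        totals[bucket][0] += int(satisfied)
        totals[bucket][1] += 1
    # Satisfaction rate per tenure bucket.
    return {bucket: sat / n for bucket, (sat, n) in totals.items()}

sessions = [("new", True), ("new", False), ("new", False),
            ("tenured", True), ("tenured", True), ("tenured", False)]
rates = satisfaction_by_tenure(sessions)
print(rates)  # here tenured users are satisfied twice as often as new ones
```

A large gap like this (1/3 vs. 2/3) is exactly why the strategies below recommend regularly recruiting fresh users for testing.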
7. Resulting Strategies
• Specialization/Focus: Get clarity about what your search engine is targeted at – vs. general-
purpose web search. Examples are places on a map, images, research papers.
• Cohorts: Segment your user base into cohorts and optimize for each of them.
• Start with expert and casual users.
• Interview users, analyze search traces, … to identify other larger cohorts.
• Usage: Optimize for the usage environment & target device.
• Smartphone, tablet, PC, professional multiscreen office setup.
• Correctness: Carefully evaluate the dissatisfying query results, and minimize them.
• Fresh end user participants for testing: Regularly recruit new groups of users to optimize your
algorithms – specifically people who have never used your search before.