Presented by Kathy Phillips, Enterprise Search Services Manager/VP, Wells Fargo & Co.
& Tom Lutmer, eBusiness Systems Consultant, Enterprise Search Services team, Wells Fargo & Co.
What is enterprise search? Is it a single search box that spans all enterprise resources, or is it much more than that? Explore how enterprise search applications can move beyond simple keyword search to add unique business value. Attendees will learn about the benefits and challenges of different types of search applications, such as site search, interactive search, search as business intelligence, and niche search applications. Join the discussion about the possibilities and future direction of new business applications within the enterprise.
SP24: Design a SharePoint 2013 Architecture – The Basics (Alexander Meijers)
This session walks you through the necessary steps to design a SharePoint 2013 architecture. It explains what information is needed to actually design such an architecture and discusses the many things you need to know to make the right decisions. It helps you to design a small, medium or large SharePoint farm for your customers.
Adhere Solutions, All Access Connector Suite for Google Search Appliance (Adhere Solutions)
Product overview for the All Access Suite of indexing and OneBox connectors for the Google Search Appliance by Adhere Solutions. www.adheresolutions.com
Adhere Solutions is a Google Enterprise Partner providing products and services that help organizations accelerate their adoption of Google technologies and cloud computing. Adhere's team of consultants help customers leverage Google's Enterprise Search products, Google Maps, and Google Apps to improve access to information, productivity, and collaboration.
Ladies Be Architects - Study Group III: OAuth 2.0 (Ep 1) (gemziebeth)
Join us as we look into OAuth 2.0 to help us study for the Identity and Access Management Designer certification exam. Episode 1 covers the principles of OAuth 2.0.
Salesforce Backup, Restore & Archiving - Adam Best, Senior Program Architect (gemziebeth)
- How Salesforce protects your data
- Backup Options
- Salesforce Native Backup Tools
- Heroku External Objects
- Archiving Options
- Where Can You Go Next To Learn More
This presentation explains the details of all search components, how to properly configure your search topology, and your options for extending your search farm in a hybrid “cloud/on-prem” scenario. You will learn what you need to consider when designing your search to handle your organization's needs. We will dive into scripting a high-availability search topology, keeping it healthy, and managing your day-to-day search operations.
Learn how to optimize your search for the best performance and relevancy to support reliable search applications. Together, we will review where search lives in the farm and the crawl components needed to implement a scalable farm.
PoolParty is a thesaurus management system and a SKOS editor for the Semantic Web, including text mining and linked data capabilities. The system helps build and maintain multilingual thesauri through an easy-to-use interface. The PoolParty server provides semantic services to integrate semantic search or recommender systems into systems such as CMS, DMS, CRM, or wikis.
SharePoint Meets ECM at SharePoint Saturday (Chris Riley ☁)
Here is the talk I gave at the recent SharePoint Saturday in San Ramon, California. The purpose of the talk was to point out the disconnect between the SharePoint and ECM worlds, and how SharePoint 2010 is now an accepted ECM platform. Some of the slides are from the SharePoint product team deck.
Search-Driven Applications with SharePoint 2013 (#SBSBE16) (Maximilian Melcher)
SharePoint Search is more than just a search box. Based on the extended search architecture of SharePoint 2013 and the now fully integrated FAST Search, a modern, search-based solution can be designed and implemented.
In this session the different Search APIs will be introduced and presented, along with suggestions, impulses, and hints showing how you can develop search-driven applications based on SharePoint Search.
Introduction to the Office Dev PnP Core Libraries (Eric Shupps)
The Office 365 Developer Patterns and Practices (PnP) team has released two libraries focused on increasing developer productivity by reducing the amount of code needed when building remote applications. The Office 365 Developer PnP Core Component is a managed code library that extends and encapsulates commonly used Client Object Model operations. The Office 365 Developer PnP JavaScript Core Library is a JavaScript library that simplifies the use of the REST API. After a brief discussion of the Office 365 Developer Patterns and Practices project as a whole, we'll move to demos showing how to get started using the Office 365 Developer PnP Core libraries.
The Searchmaster's Toolbox - David Hawking, Funnelback Search (Squiz)
David Hawking, pre-eminent information retrieval researcher and Funnelback's Chief Scientist, gave this talk on the need for a Search Master within all but the smallest organisations at a Funnelback Seminar in London on March 31st, 2010. Even if there isn't an individual with that specific job title, the responsibility for maintaining, improving and monitoring search needs to be prioritised and clearly assigned. David's presentation covers the reasons why search is so vitally important and the tools which can improve search results.
Making IA Real: Planning an Information Architecture Strategy (Chiara Fox Ogan)
Presented at the Internet Librarian conference in 2001. Provides an introduction to what information architecture is and how you can use its methods to develop a good website.
EPC Group - Comprehensive Overview of SharePoint 2010's Enterprise Search Capabilities (EPC Group)
A comprehensive overview of SharePoint 2010's enterprise search capabilities, to assist with your roadmap planning and to help you and your organization understand what is possible with SharePoint 2010.
Search Strategy for Enterprise SharePoint 2013 - Vancouver SharePoint Summit (Joel Oleson)
The Four Pillars of Search really help you focus your search planning. In this session we dig into the context, content, metadata, and UX (user experience) that really matter. We also dig into a variety of publicly accessible SharePoint 2013 real-world search pages to demonstrate the value.
2017-01-11 Intelligent Search and Intranet - Chihuahuas vs Muffins v1 (Don Miller)
This is a presentation for people looking to improve enterprise search and intranets. It provides details on Microsoft Search, Azure Search, and Elasticsearch, and how to take a basic search platform and transform it into what Gartner calls Insight Engines and what Forrester calls Cognitive Search and Knowledge Discovery.
I'm presenting the IBM CIO 2010 Outlook at IBM iForum, Zurich (26th November 2007). I can't take the credit for writing it; Dave Newbold did the hard work on this one.
Enterprise Search is still treated as a one-time IT project in most cases, although it should rather be a business process with a well-defined lifecycle, metrics, and analytics. Measuring and controlling success is very challenging; in most cases, it has to be a strong collaboration between internal and external experts. In this session, I introduce several roles in this process as well as useful metrics and best practices. Attendees will get a practical plan for quality management of their Enterprise Search solution as the key takeaway.
Text Classification Powered by Apache Mahout and Lucene (lucenerevolution)
Presented by Isabel Drost-Fromm, Software Developer, Apache Software Foundation/Nokia Gate 5 GmbH at Lucene/Solr Revolution 2013 Dublin
Text classification automates the task of filing documents into pre-defined categories based on a set of example documents. The first step in automating classification is to transform the documents to feature vectors. Though this step is highly domain specific, Apache Mahout provides you with a lot of easy-to-use tooling to help you get started, most of which relies heavily on Apache Lucene for analysis, tokenisation, and filtering. This session shows how to use faceting to quickly get an understanding of the fields in your documents. It will walk you through the steps necessary to convert your text documents into feature vectors that Mahout classifiers can use, including a few anecdotes on drafting domain-specific features.
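As a rough illustration of the feature-vector step described above, here is a minimal pure-Python TF-IDF sketch. Mahout's actual vectorizers (and the Lucene analysis chain they build on) are far more capable, so treat the weighting below as illustrative only:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Convert tokenized documents into sparse TF-IDF feature vectors."""
    n = len(docs)
    # Document frequency: in how many documents each term appears.
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        # tf * idf, with tf normalized by document length.
        vec = {term: (count / len(doc)) * math.log(n / df[term])
               for term, count in tf.items()}
        vectors.append(vec)
    return vectors

docs = [["solr", "search", "lucene"],
        ["mahout", "classification", "lucene"],
        ["storm", "streaming"]]
vecs = tfidf_vectors(docs)
```

Terms that occur in many documents (like "lucene" above) get a lower weight than rarer, more discriminative terms, which is exactly what a downstream classifier wants.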
Presented by Markus Klose, Search + Big Data Consultant SHI Elektronische Medien GmbH at Lucene/Solr Revolution 2013 Dublin
Kibana4Solr is search-driven, scalable, browser-based, and extremely user-friendly (also for non-technical users). Logs are everywhere: any device, system, or human can potentially produce a huge amount of information saved in logs. The volume of available logs and their semi-structured nature make meaningful processing in real time quite difficult, so valuable business insights stored in logs might never be found. Kibana4Solr is a search-driven approach to that challenge. It offers a user-friendly, browser-based dashboard which can easily be customized to particular needs. In this session Kibana4Solr will be introduced, some light will be shed on its architectural features, some ideas will be given for possible business use cases, and finally a live demo of Kibana4Solr will be shown.
Building Client-side Search Applications with Solr (lucenerevolution)
Presented by Daniel Beach, Search Application Developer, OpenSource Connections
Solr is a powerful search engine, but creating a custom user interface can be daunting. In this fast-paced session I will present an overview of how to implement a client-side search application using Solr. Using open-source frameworks like SpyGlass (to be released in September) can be a powerful way to jumpstart your development by giving you out-of-the-box results views with support for faceting, autocomplete, and detail views. During this talk I will also demonstrate how we have built and deployed lightweight applications that are able to be performant under large user loads, with minimal server resources.
Integrate Solr with real-time stream processing applications (lucenerevolution)
Presented by Timothy Potter, Founder, Text Centrix
Storm is a real-time distributed computation system used to process massive streams of data. Many organizations are turning to technologies like Storm to complement batch-oriented big data technologies, such as Hadoop, to deliver time-sensitive analytics at scale. This talk introduces an emerging architectural pattern of integrating Solr and Storm to process big data in real time. There are a number of natural integration points between Solr and Storm, such as populating a Solr index or supplying data to Storm using Solr's real-time get support. In this session, Timothy will cover the basic concepts of Storm, such as spouts and bolts. He'll then provide examples of how to integrate Solr into Storm to perform large-scale indexing in near real time. In addition, we'll see how to embed Solr in a Storm bolt to match incoming tuples against pre-configured queries, commonly known as a percolator. Attendees will come away from this presentation with a good introduction to stream processing technologies and several real-world use cases of how to integrate Solr with Storm.
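The "match incoming tuples against pre-configured queries" idea can be sketched without Storm or Solr at all. The subscription format and matching rule below are deliberate simplifications (a real percolator evaluates full Lucene queries against an in-memory index):

```python
# Stored "subscriptions": each is a set of terms that must all appear
# in an incoming document for the subscription to fire.
subscriptions = {
    "alert-solr": {"solr", "error"},
    "alert-storm": {"storm"},
}

def percolate(document: str):
    """Return the ids of stored queries that match an incoming document."""
    tokens = set(document.lower().split())
    return sorted(qid for qid, terms in subscriptions.items()
                  if terms <= tokens)

matches = percolate("Storm bolt raised a Solr error")
# matches == ["alert-solr", "alert-storm"]
```

In the Storm pattern described above, this matching step would live inside a bolt, with each incoming tuple percolated as it streams past.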
Configure your Solr cluster to handle hundreds of millions of documents without even noticing, handle queries in milliseconds, use Near Real Time indexing and searching with document versioning. Scale your cluster both horizontally and vertically by using shards and replicas. In this session you'll learn how to make your indexing process blazing fast and make your queries efficient even with large amounts of data in your collections. You'll also see how to optimize your queries to leverage caches as much as your deployment allows and how to observe your cluster with the Solr administration panel, JMX, and third-party tools. Finally, learn how to make changes to already deployed collections: split their shards and alter their schema by using the Solr API.
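Scaling horizontally with shards, as described above, comes down to routing each document to a shard via a stable hash of a routing key. SolrCloud's actual router hashes the (composite) document id with MurmurHash; the CRC32 below is only a stand-in for the idea:

```python
import zlib

def route_to_shard(doc_id: str, num_shards: int) -> int:
    """Pick a shard for a document by hashing its id (stable across runs)."""
    return zlib.crc32(doc_id.encode("utf-8")) % num_shards

# Every node applying the same function agrees on placement, so queries
# can be fanned out to all shards and the partial results merged.
shards = {i: [] for i in range(4)}
for doc_id in ["doc-1", "doc-2", "doc-3", "doc-100"]:
    shards[route_to_shard(doc_id, 4)].append(doc_id)
```

Shard splitting then amounts to subdividing a shard's hash range and reassigning its documents, which is why a stable, uniform hash matters.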
Presented by Rafal Kuć, Consultant and Software Engineer, Sematext Group, Inc.
Even though Solr can run without causing any troubles for long periods of time it is very important to monitor and understand what is happening in your cluster. In this session you will learn how to use various tools to monitor how Solr is behaving at a high level, but also on Lucene, JVM, and operating system level. You'll see how to react to what you see and how to make changes to configuration, index structure and shards layout using Solr API. We will also discuss different performance metrics to which you ought to pay extra attention. Finally, you'll learn what to do when things go awry - we will share a few examples of troubleshooting and then dissect what was wrong and what had to be done to make things work again.
Implementing a Custom Search Syntax using Solr, Lucene, and Parboiled (lucenerevolution)
In a recent project with the United States Patent and Trademark Office, OpenSource Connections was asked to prototype the next generation of patent search using Solr and Lucene. An important aspect of this project was the implementation of BRS, a specialized search syntax used by patent examiners during the examination process. In this fast-paced session we will relate our experiences and describe how we used a combination of Parboiled (a Parsing Expression Grammar [PEG] parser), Lucene Queries and SpanQueries, and an extension of Solr's QParserPlugin to build BRS search functionality in Solr. First we will characterize the patent search problem and then define the BRS syntax itself. We will then introduce the Parboiled parser and discuss various considerations that one must make when designing a syntax parser. Following this we will describe the methodology used to implement the search functionality in Lucene/Solr. Finally, we will include an overview of our syntactic and semantic testing strategies. The audience will leave this session with an understanding of how Solr, Lucene, and Parboiled may be used to implement their own custom search parser.
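A tiny recursive-descent parser gives a feel for what a grammar-driven query parser does, even though Parboiled is a Java PEG library and BRS is far richer. The miniature boolean syntax below is invented purely for illustration:

```python
import re

# Tokenizer for a toy query language: parentheses, AND/OR, bare words.
TOKEN = re.compile(r"\(|\)|AND|OR|\w+")

def parse(query: str):
    """Parse a tiny boolean query syntax into a nested tuple AST."""
    tokens = TOKEN.findall(query)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def expr():          # expr := term (("AND" | "OR") term)*
        nonlocal pos
        node = term()
        while peek() in ("AND", "OR"):
            op = tokens[pos]; pos += 1
            node = (op, node, term())
        return node

    def term():          # term := "(" expr ")" | WORD
        nonlocal pos
        if peek() == "(":
            pos += 1
            node = expr()
            pos += 1     # consume ")"
            return node
        word = tokens[pos]; pos += 1
        return word

    return expr()

ast = parse("solr AND (lucene OR parboiled)")
# ast == ("AND", "solr", ("OR", "lucene", "parboiled"))
```

In the Solr context, a QParserPlugin would walk such an AST and emit the corresponding Lucene Query or SpanQuery objects.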
Many of us tend to hate or simply ignore logs, and rightfully so: they’re typically hard to find, difficult to handle, and are cryptic to the human eye. But can we make logs more valuable and more usable if we index them in Solr, so we can search and run real-time statistics on them? Indeed we can, and in this session you’ll learn how to make that happen. In the first part of the session we’ll explain why centralized logging is important, what valuable information one can extract from logs, and we’ll introduce the leading tools from the logging ecosystems everyone should be aware of - from syslog and log4j to LogStash and Flume. In the second part we’ll teach you how to use these tools in tandem with Solr. We’ll show how to use Solr in a SolrCloud setup to index large volumes of logs continuously and efficiently. Then, we'll look at how to scale the Solr cluster as your data volume grows. Finally, we'll see how you can parse your unstructured logs and convert them to nicely structured Solr documents suitable for analytical queries.
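The parse-your-unstructured-logs step mentioned above can be sketched as a single regular expression that turns a log4j-style line into a structured document ready for indexing. The exact line format below is an assumption; real pipelines (LogStash, Flume) ship configurable parsers for many formats:

```python
import re

# Assumed log4j-ish layout: "DATE TIME LEVEL logger - message"
LOG_LINE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
    r"(?P<level>[A-Z]+) +"
    r"(?P<logger>\S+) - (?P<message>.*)"
)

def to_solr_doc(line: str) -> dict:
    """Turn one raw log line into a structured document for indexing."""
    m = LOG_LINE.match(line)
    if m is None:
        return {"message": line}   # keep unparseable lines searchable
    return m.groupdict()

doc = to_solr_doc("2013-11-04 12:00:01 ERROR org.apache.solr.core - too many open files")
# doc["level"] == "ERROR"
```

Once fields like `level`, `logger`, and `ts` exist as separate document fields, analytical queries (errors per hour, noisiest logger) become ordinary faceted searches.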
Real-time Inverted Search in the Cloud Using Lucene and Storm (lucenerevolution)
Building real-time notification systems is often limited to basic filtering and pattern matching against incoming records. Allowing users to query incoming documents using Solr's full range of capabilities is much more powerful. In our environment we needed a way to allow for tens of thousands of such query subscriptions, meaning we needed to find a way to distribute the query processing in the cloud. By creating in-memory Lucene indices from our Solr configuration, we were able to parallelize our queries across our cluster. To achieve this distribution, we wrapped the processing in a Storm topology to provide a flexible way to scale and manage our infrastructure. This presentation will describe our experiences creating this distributed, real-time inverted search notification framework.
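A common trick behind such inverted-search systems is to index the stored queries themselves, so an incoming document is only checked against queries that share at least one of its terms. A toy sketch, restricted to conjunctive keyword queries (real deployments evaluate full Lucene queries against in-memory indices):

```python
from collections import defaultdict

# Stored queries: id -> set of required terms (a simplification).
queries = {
    "q1": {"lucene", "storm"},
    "q2": {"solr"},
    "q3": {"hadoop", "pig"},
}

# Invert the queries: term -> ids of queries that mention it.
term_to_queries = defaultdict(set)
for qid, terms in queries.items():
    for term in terms:
        term_to_queries[term].add(qid)

def matching_queries(document_tokens):
    """Gather candidate queries via the inverted map, then verify each fully."""
    candidates = set()
    for token in document_tokens:
        candidates |= term_to_queries.get(token, set())
    return {qid for qid in candidates if queries[qid] <= set(document_tokens)}

hits = matching_queries(["solr", "lucene", "storm"])
# hits == {"q1", "q2"}
```

The candidate-gathering step is what makes tens of thousands of subscriptions tractable: most queries are never evaluated against most documents.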
Solr's Admin UI - Where does the data come from? (lucenerevolution)
Like many web applications of the past, the Solr Admin UI up until 4.0 was entirely server-based. It used separate code on the server to generate its dashboards, overviews, and statistics. All that code had to be maintained, and still you weren't really able to use that data for the things you needed it for: it was wrapped in HTML, most of the time difficult to extract, and its structure changed from time to time without announcement. After a short look back, we're going to look at the current state of the Solr Admin UI: a client-side application running completely in your browser. We'll see how it works, where it gets its data from, and how you can get the very same data and wire it into your own custom applications, dashboards, and/or monitoring systems.
Steve will show how and why to use Solr’s new Schemaless Mode, under which document indexing can be performed with no up-front schema configuration. Solr uses content clues to choose among a predefined set of field types and then automatically add previously unseen fields to the schema.
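The "content clues" idea above can be sketched as a cascade of parse attempts: try the most specific type first and fall back to text. The field-type names below mirror Solr conventions, but the exact detection rules are an assumption for illustration:

```python
from datetime import datetime

def guess_field_type(value: str) -> str:
    """Guess a field type from content clues, as schemaless indexing does."""
    # Try numeric types first, most specific to least.
    for parser, ftype in ((int, "plong"), (float, "pdouble")):
        try:
            parser(value)
            return ftype
        except ValueError:
            pass
    # Then an ISO-8601 date (one assumed pattern; Solr tries several).
    try:
        datetime.strptime(value, "%Y-%m-%dT%H:%M:%SZ")
        return "pdate"
    except ValueError:
        pass
    # Fall back to free text.
    return "text_general"

ftype = guess_field_type("2013-11-04T09:00:00Z")
# ftype == "pdate"
```

A previously unseen field would then be added to the schema with the guessed type, which is why the first value indexed for a field matters in schemaless mode.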
High Performance JSON Search and Relational Faceted Browsing with Lucene (lucenerevolution)
Presented by Renaud Delbru, Co-Founder, SindiceTech
In this presentation, we will discuss how Lucene and Solr can be used for very efficient search of tree-shaped schemaless documents, e.g. JSON or XML, and can then be made to address both graph and relational data search. We will discuss the capabilities of SIREn, a Lucene/Solr plugin we have developed to deal with huge collections of tree-shaped schemaless documents, and how SIREn is built using Lucene extensibility capabilities (Analysis, Codec, Flexible Query Parser). We will compare it with Lucene's BlockJoin Query API in nested schemaless data-intensive scenarios. We will then go through use cases that show how relational or graph data can be turned into JSON documents using Hadoop and Pig, and how this can be used in conjunction with SIREn to create relational faceting systems with unprecedented performance. Take-away lessons from this session will be awareness about using Lucene/Solr and Hadoop for relational and graph data search, as well as the awareness that it is now possible to have relational faceted browsers with sub-second response time on commodity hardware.
Text Classification with Lucene/Solr, Apache Hadoop and LibSVM (lucenerevolution)
In this session we will show how to build a text classifier using Apache Lucene/Solr with the libSVM library. We classify our corpus of job offers into a number of predefined categories; each indexed document (a job offer) then belongs to zero, one, or more categories. Known machine learning techniques for text classification include the naïve Bayes model, logistic regression, neural networks, support vector machines (SVM), etc. We use Lucene/Solr to construct the feature vectors. Then we use the libSVM library, known as the reference implementation of the SVM model, to classify the documents. We construct as many one-vs-all SVM classifiers as there are classes in our setting, then using the Hadoop MapReduce framework we reconcile the results of our classifiers. The end result is a scalable multi-class classifier. Finally we outline how the classifier is used to enrich basic Solr keyword search.
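The one-vs-all construction described above is independent of the underlying binary learner. The sketch below wires it up with a toy word-overlap scorer standing in for libSVM, purely to show the mechanics:

```python
def train_one_vs_all(examples, classes, train_binary):
    """Build one binary classifier per class: its class vs the rest."""
    return {c: train_binary([(x, 1 if y == c else -1) for x, y in examples])
            for c in classes}

def classify(models, x):
    """Pick the class whose binary model scores the document highest."""
    return max(models, key=lambda c: models[c](x))

# Toy stand-in for an SVM: score = overlap with words seen in positives.
def train_binary(labeled):
    positive_words = set()
    for tokens, label in labeled:
        if label == 1:
            positive_words.update(tokens)
    return lambda tokens: len(positive_words & set(tokens))

examples = [(["java", "solr"], "engineering"),
            (["sales", "crm"], "sales")]
models = train_one_vs_all(examples, {"engineering", "sales"}, train_binary)
label = classify(models, ["solr", "lucene"])
# label == "engineering"
```

With a real SVM the per-class scores would be decision values, and the MapReduce step mentioned above reconciles those scores across classifiers at scale.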
Faceted search is a powerful technique to let users easily navigate the search results. It can also be used to develop rich user interfaces, which give an analyst quick insights about the documents space. In this session I will introduce the Facets module, how to use it, under-the-hood details as well as optimizations and best practices. I will also describe advanced faceted search capabilities with Lucene Facets.
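Conceptually, facet counting tallies field values across the matching documents. A minimal sketch over a result list (Lucene's Facets module computes this against index-side data structures rather than materialized results, which is what makes it fast):

```python
from collections import Counter

def facet_counts(results, field):
    """Count how many matching documents carry each value of a field."""
    return Counter(doc[field] for doc in results if field in doc)

results = [
    {"title": "Solr in Action", "category": "search"},
    {"title": "Lucene in Action", "category": "search"},
    {"title": "Storm Applied", "category": "streaming"},
]
counts = facet_counts(results, "category")
# counts == Counter({"search": 2, "streaming": 1})
```

The UI then renders each value with its count as a clickable drill-down filter, which is the navigation pattern the abstract describes.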
Presented by Shai Erera, Researcher, IBM
Lucene's arsenal has recently expanded to include two new modules: Index Sorting and Replication. Index sorting lets you keep an index consistently sorted based on some criteria (e.g. modification date). This allows for efficient search early-termination as well as achieve better index compression. Index replication lets you replicate a search index to achieve high-availability, fault tolerance as well as take hot index backups. In this talk we will introduce these modules, discuss implementation and design details as well as best practices.
As part of their work with large media monitoring companies, Flax has developed a technique for applying tens of thousands of stored Lucene queries to a document in under a second. We'll talk about how we built intelligent filters to reduce the number of actual queries applied and how we extended Lucene to extract the exact hit positions of matches, the challenges of implementation, and how it can be used, including applications that monitor hundreds of thousands of news stories every day.
Spellchecking in Trovit: Implementing a Contextual Multi-language Spellchecker (lucenerevolution)
Presented by Xavier Sanchez Loro, Ph.D, Trovit Search SL
This session explains the implementation and use case for spellchecking in the Trovit search engine. Trovit is a classified-ads search engine supporting several different sites, one for each country and vertical. Our search engine supports multiple indexes in multiple languages, each with several million indexed ads. Those indexes are segmented into several different sites depending on the type of ads (homes, cars, rentals, products, jobs, and deals). We have developed a multi-language spellchecking system using Solr and Lucene in order to help our users better find the desired ads and avoid the dreaded zero results as much as possible. As such, our goal is not pure orthographic correction, but also the suggestion of correct searches for a given site.
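The suggestion step can be sketched as a similarity cutoff over the index's own terms. The sketch below uses Python's stdlib difflib rather than Lucene's spellchecker components, and the cutoff value is arbitrary:

```python
import difflib

def suggest(term: str, index_terms, cutoff: float = 0.7):
    """Suggest index terms close to a (possibly misspelled) query term."""
    return difflib.get_close_matches(term, index_terms, n=3, cutoff=cutoff)

# Index terms would come from the per-site, per-language index,
# which is how suggestions stay contextual to each vertical.
index_terms = ["apartment", "barcelona", "madrid", "rental"]
suggestions = suggest("apartmant", index_terms)
# suggestions == ["apartment"]
```

Drawing candidates from each site's own index (rather than a global dictionary) is what makes the suggestions contextual: a term that is a typo on one vertical may be a valid word on another.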
Beyond simple search – adding business value in the enterprise
1. Beyond Simple Search – Adding Business Value in the Enterprise
Kathlina (Kathy) M. Phillips
Vice President, Technology Manager Enterprise Search Services (ESS)
Tom Lutmer
eBusiness Systems Consultant, Enterprise Search Services (ESS)
3. Agenda
Who are We? What Do We Do?
Search Architecture
Beyond Simple Search
Search Applications – Value & Techniques
Look to the Future
Q & A
4. Our Intranet – Served by ESS
Enterprise Search Services (ESS) serves:
265,000 team members (potential users – all time zones)
2+ million unstructured documents
20+ million structured content items
1,300+ domains
10,000+ websites
2+ million queries/month
Content sources: SharePoint, OpenText, Documentum, WebSphere, ColdFusion, blogs, wikis, social spaces, .NET, ASP, PHP, JSP, etc.
5. Search Business Value for Wells Fargo
The Wells Fargo intranet is served by enterprise-scoped search plus many site-specific applications, which in turn feed customer-impact, business-analysis, and business-intelligence applications.
Enterprise:
Time savings and efficiency
Reduced rework and duplication
Timely and updated communications
Collaboration and knowledge sharing
Site Specific:
Timely access to notifications, forms, group communications
Knowledge base applications
Customer Impact:
Customer support
Timely access to notifications, forms, processes, procedures
Knowledge base applications
Business Analysis:
Deeper-level analysis
Results only relevant in context of application
Structured and unstructured content
Business Intelligence:
Connects relationships in data
Results only relevant in context of application
Structured and unstructured content
6. Search Architecture
Enterprise Search Web Services (JSON, XML, HTML): an internal service able to switch to results from different search engines (not dependent on any one search solution).
Supporting components: Best Bets / autocomplete, admin interface / metrics, view / query server management.
Search engines: FAST ESP Search (web crawl, database connectors), LucidWorks (Lucene/Solr) Search (LucidWorks connectors, OpenText connector, other custom connectors), and optionally other search engines.
Consumers: hosted search apps, custom search apps, intranet websites, and search apps using the web service's XML or JSON output.
10. Enterprise Scope Applications
Enterprise Scope – typical keyword intranet search: access, find, and retrieve information across a variety of websites.
Challenges: crawling, access, noise in results, poor/inconsistent quality content.
Techniques: removal of content, scripting to improve quality/normalize, metrics to verify depth/scope, autocomplete, social feedback (click-through, best bets, tagging).
Example crawl configuration for one enterprise-scope app:
Crawler recreates HTML for single sign-on pages
SharePoint connector, with scripting for metadata
Full crawl of specific sites, with scripting for metadata
2-hop crawl of all "published" sites, with scripting for metadata
All feeds flow into a single index
11. Internet vs. Intranet Search Results
Internet (paying customers):
Higher quality content in top results
Tuned results by working directly with the search solution (paid for tuning)
Mostly HTML/web pages
Searches usually tuned for mass appeal (popular searches)
Intranet (co-workers):
Lower quality content overall; quality of content varies widely
Larger variety of content types
Searches vary between popular mass appeal and many very specific to the current task
12. Crawling Challenges
A web crawler must turn a mix of high-quality content with good metadata and low-quality content with bad or missing metadata into a usable index. Common challenges:
Missing body content – JavaScript-built or browser-dependent pages
Duplicates – alternate domain names, dynamic scripts, content published multiple times, upper/lowercase variants
Crawl rates, depth, and link-following methods
Authentication – custom, incorrect, single sign-on
Proxies, firewalls, robots rules
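The duplicate problems named above (alternate domain names, case variants, dynamic-script parameters such as session IDs) are commonly handled by canonicalizing URLs before indexing. A minimal Python sketch, assuming a hypothetical alias table and parameter blocklist:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Hypothetical config: alias table maps alternate domains to one canonical
# host; blocklist drops session/tracking parameters that make one page
# look like many.
DOMAIN_ALIASES = {"intranet.example.com": "www.example.com"}
IGNORED_PARAMS = {"sessionid", "jsessionid", "sid"}

def canonical_url(url):
    parts = urlsplit(url)
    host = DOMAIN_ALIASES.get(parts.netloc.lower(), parts.netloc.lower())
    path = parts.path.lower().rstrip("/")
    # Drop session parameters and sort the rest for a stable dedup key.
    params = sorted((k.lower(), v) for k, v in parse_qsl(parts.query)
                    if k.lower() not in IGNORED_PARAMS)
    return urlunsplit(("http", host, path, urlencode(params), ""))

seen = set()
for url in ["http://Intranet.example.com/Forms/?sessionid=abc",
            "http://www.example.com/forms"]:
    key = canonical_url(url)
    print(url, "-> duplicate" if key in seen else "-> new")
    seen.add(key)
```

Both URLs reduce to the same key, so the second is recognized as a duplicate instead of being indexed twice.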
13. Post Processing and Scripting
Pipeline components: data source, update controller, preprocessor, script file (*.js) invoked by the update handler, and the index (or a "do not index" path).
Scripting performs:
Metadata augmentation
Transformation tables/matching
Text extraction
Rules / regex
Code / logic (complex/unique cases)
Content removal
Merge/copy metadata
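The real pipeline hook described above is a *.js update-handler script; this Python sketch only illustrates the same techniques – content removal, a transformation table, a regex rule, and metadata merge/copy. The field names and the transformation table are invented for the example.

```python
import re

# Hypothetical transformation table: raw line-of-business code -> display value.
LOB_TABLE = {"hr": "Human Resources", "fin": "Finance"}

def process(doc):
    """Post-process one document dict; return None to mean 'do not index'."""
    # Content removal: drop low-value pages before they reach the index.
    if re.search(r"under construction", doc.get("body", ""), re.I):
        return None
    # Transformation table: normalize a raw code into a display value.
    if doc.get("lob_code") in LOB_TABLE:
        doc["lob"] = LOB_TABLE[doc["lob_code"]]
    # Rules/regex: extract a document ID mentioned in the body text.
    m = re.search(r"DOC-(\d+)", doc.get("body", ""))
    if m:
        doc["doc_id"] = m.group(1)
    # Merge/copy metadata: fall back to the filename when a title is missing.
    doc.setdefault("title", doc.get("filename", "Untitled"))
    return doc

print(process({"body": "See DOC-123 for details.", "lob_code": "hr",
               "filename": "policy.pdf"}))
```

Each document passes through the same function on its way to the index, so quality fixes apply uniformly regardless of which crawler or connector produced the document.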
14. Site Specific – Self Service
Site Specific: keyword intranet search for a smaller scoped set of content or a single website.
Technique: self service – site owners copy generated code to include search on their own site.
16. Customer Impact Applications
Customer Impact – keyword intranet search with interactivity around a specific business function:
• Customer support
• Timely access to notifications, forms, group communications
• Knowledge base applications
Challenges & Techniques: security and performance; custom user interfaces and metadata.
Security Architecture:
Content acquisition – content may or may not include ACLs at acquisition time (content with ACLs, or content without ACLs plus a database with ACL mapping)
Query/index – authentication and ACL matching at query time, with user-group ACL caching
Websites – security at the website level is all or nothing
Lock direct access to Solr or other engines
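The "match ACLs at query time" pattern above can be sketched as follows: the web service resolves the user's groups (cached, per the ACL-caching note), then filters out hits whose ACL the user does not hold. The directory contents, group names, and fields are all invented for illustration.

```python
# Per-user group cache, standing in for the "user group ACL caching" layer.
GROUP_CACHE = {}

def groups_for(user):
    """Resolve and cache a user's groups (a real service would query LDAP)."""
    if user not in GROUP_CACHE:
        directory = {"alice": {"hr-staff", "all-team"},  # hypothetical data
                     "bob": {"all-team"}}
        GROUP_CACHE[user] = frozenset(directory.get(user, set()))
    return GROUP_CACHE[user]

def filter_hits(user, hits):
    """Drop results whose ACL shares no group with the user."""
    allowed = groups_for(user)
    return [h for h in hits if allowed & set(h["acl"])]

hits = [{"title": "Benefits FAQ", "acl": ["all-team"]},
        {"title": "Salary bands", "acl": ["hr-staff"]}]
print([h["title"] for h in filter_hits("bob", hits)])    # general-access doc only
print([h["title"] for h in filter_hits("alice", hits)])  # both documents
```

Because filtering happens inside the web service, direct access to Solr (or any other engine) can be locked down, and every search app inherits the same security behavior.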
17. Business Analysis Applications
Business Analysis – specialty search solutions for deeper-level analysis.
Example architecture: a web app calls the search app, which combines phonetic libraries (Apache Commons Codec) with a content index (Lucene) and a thesaurus index (Lucene).
Result: queries match "MN" or "Minnesota" – businesses listed under either form show up.
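The application above uses Apache Commons Codec for the phonetic matching and a separate Lucene thesaurus index; this Python sketch covers only the thesaurus half, expanding a query term to its synonyms before matching. The synonym table and business names are invented for the example.

```python
# Hypothetical thesaurus: each term maps to the full set of equivalents.
SYNONYMS = {"mn": {"mn", "minnesota"}, "minnesota": {"mn", "minnesota"}}

def expand(term):
    """Return the term plus all of its thesaurus equivalents."""
    return SYNONYMS.get(term.lower(), {term.lower()})

def matches(query_term, business_name):
    """True when any expanded form of the query appears in the name."""
    words = {w.strip(",.").lower() for w in business_name.split()}
    return bool(expand(query_term) & words)

businesses = ["Acme Plumbing, MN", "North Star Bakery, Minnesota",
              "Desert Cafe, AZ"]
print([b for b in businesses if matches("MN", b)])
```

A search for "MN" returns businesses listed under either "MN" or "Minnesota", which is exactly the behavior the slide describes.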
18. Business Intelligence Applications
Business Intelligence: search across structured and unstructured data sources for discovery and reporting.
Once search results are returned, sliders can be used to filter to specific results, for example:
• Companies with FICO > 750
• Gross annual sales > $2 million
• Located in Scottsdale
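The slider-style filtering above amounts to narrowing an already-returned result set by numeric ranges and location. A minimal sketch, with invented records and field names:

```python
# Hypothetical joined search results (structured + unstructured fields).
results = [
    {"company": "A Corp", "fico": 780, "sales": 3_500_000, "city": "Scottsdale"},
    {"company": "B LLC",  "fico": 720, "sales": 5_000_000, "city": "Scottsdale"},
    {"company": "C Inc",  "fico": 800, "sales": 1_000_000, "city": "Phoenix"},
]

def slider_filter(hits, min_fico=None, min_sales=None, city=None):
    """Yield hits passing every filter that is set (None means 'any')."""
    for h in hits:
        if min_fico is not None and h["fico"] <= min_fico:
            continue
        if min_sales is not None and h["sales"] <= min_sales:
            continue
        if city is not None and h["city"] != city:
            continue
        yield h

narrowed = list(slider_filter(results, min_fico=750,
                              min_sales=2_000_000, city="Scottsdale"))
print([h["company"] for h in narrowed])  # companies meeting all three criteria
```

Dragging a slider in the UI simply re-runs this filter with new bounds over the cached result set, so no new query hits the engine.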
19. Where Are We Headed?
TO DO:
Social tags
Best Bets
Integrating click-through
Metrics, metrics, metrics
Clustering
Semantics
Big Data
Trending:
Enterprise search – "gateway" to search apps
Site search / embedded search – value add rising
Business intelligence – value add rising
Quality audits & metrics to show value
Social/logs/feedback for relevancy & personalization
New user interfaces – mobile, interactive, embedded