Web search engines are constantly being developed to better meet user needs. This development focuses not only on lexical pattern matching but also on processing the sense of a query, which can be done in two ways: by extracting content through Natural Language Processing (NLP), or by assigning semantic descriptors from controlled languages. The technological options available are therefore either free-text analysis or semantic annotation. In the first case, the quality of semantic retrieval by means of NLP is still under discussion; in the second, human interaction is essential. Although these solutions represent contrasting positions in the traditional debate on this matter, the two methodologies are now converging. Semantic Web search engines need many pages to be annotated, which requires an enormous effort, so NLP is an important aid for automatic or semi-automatic annotation. At the same time, the precision of text analysis can be improved by descriptor-assignment techniques applied by users and professionals. The trend, in conclusion, is toward collective knowledge systems that improve as more people participate, since they are based on human contributions. These systems will likely be complemented by chunking, clustering, parsing, spell-checking and other NLP algorithms.
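The semi-automatic annotation described above can be sketched as follows: a program scans free text for phrases from a controlled vocabulary and proposes the matching semantic descriptors, which a human annotator can then confirm or reject. This is a minimal illustration, not a production method; the vocabulary and descriptor labels are hypothetical.

```python
# Minimal sketch of NLP-assisted semi-automatic annotation:
# match free text against a small controlled vocabulary and
# suggest semantic descriptors for a human to confirm.

# Hypothetical controlled vocabulary mapping phrases to descriptors.
CONTROLLED_VOCAB = {
    "natural language processing": "NLP",
    "semantic annotation": "ANNOTATION",
    "search engine": "SEARCH",
}

def suggest_descriptors(text):
    """Return descriptors whose phrases occur in the text (case-insensitive)."""
    lowered = text.lower()
    return [descriptor
            for phrase, descriptor in CONTROLLED_VOCAB.items()
            if phrase in lowered]

doc = "Semantic annotation lets a search engine index meaning, not just words."
print(suggest_descriptors(doc))  # candidate descriptors for human review
```

In a real system the matching step would use richer NLP (chunking, parsing, lemmatization) rather than literal substring search, but the division of labor is the same: the machine proposes annotations, and people validate them.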