Spam Clustering Using Wave Oriented K-Means

Claudiu Cristian MUSAT, Ionut GRIGORESCU, Alexandru TRIFAN, Carmen MITRICA
BitDefender, Bucharest, Romania 062204

Abstract

We introduce a system designed to replace older spam signature extraction algorithms that relied heavily on human intervention. Starting from a given spam flow and a set of relevant features, we generate descriptions of previously unknown messages and offer statistics on spam dynamics. The novelty of our approach lies in the way we use constrained k-means for spam fingerprinting, along with a dual-purpose spam history retaining mechanism. We show that the method is viable in a highly volatile and fragmented spam environment.

Introduction

The debate about the antispam community's ability to find good proactive spam filtering methods is ongoing, but until an ultimate proactive solution is found, reactive methods will probably remain in use. In the fight between spammers and filters, one of the most important drawbacks of reactive filters is that by the time they have been retrained to face a new threat, they may already be obsolete and the threat long gone.

Special attention must be paid to the speed of the training system, which in our opinion is a crucial factor in designing a robust mechanism. In terms of speed, all other factors being constant, an obvious way to reduce the training time of a reactive filter is to reduce the amount of information processed. Another probable bottleneck is human intervention, which makes highly automated processes more reliable and desirable.

Of course, speed is not the only factor. A good spam filtering system must also be able to adapt to changes and classify an email as spam or ham with reasonable accuracy. Bearing these in mind, we address the problem of creating a system that can:

- filter unwanted or unnecessary data;
- determine known and unknown spam types;
- determine the number of unknown spam classes;
- classify given examples;
- update its knowledge base;
- provide useful statistics;
- do all of the above with little or no human intervention.

Our solution consists of a modified version of constrained k-means (Wagstaff & Cardie, 2001). Starting from a slice of our daily spam flow, the system filters out known (detectable) spam, classifies the remaining items into a variable number of classes and updates its knowledge base. All test emails similar to the ones seen in the training phase are labeled accordingly. Afterwards, many useful statistics can be extracted and used to improve the knowledge base - similar spam waves, obsolete signatures, major outbreaks (Musat, 2006), etc. This latter part is usually the most time consuming for a human supervisor, but it is not a critical process: it can run without affecting the performance of the core training phase (e.g. it can be parallelized).

An important point is that the system is not designed to classify a given corpus of spam or ham emails; it is designed to handle a constant spam flow.

Additionally, attempts have been made to cluster spam in order to determine botnet distribution. Phil Tom (Tom, 2008) concluded that although it is highly difficult to cluster spam altogether, clustering by subject and received headers alone can attain successful results - accurately detecting botnets.
One caveat is that this clustering was done offline, under laboratory conditions, and many things change when attempting to cluster spam in the wild. Clustering spam accurately online would therefore be an additional benefit.

Proposed Method

Overview

We start from our daily spam flow - one can assume that nearly every antispam provider has one. Since clustering algorithms only work with a finite number of examples, we take a snapshot of the flow and store it as the training corpus.

Afterwards the training knowledge base must be loaded. It can be regarded as a clustering history, since it keeps track of all training and testing operations and their results.

We then need to filter out already known messages in order to reduce the computational burden. Regardless of the chosen method, given that the chances of it being O(1) are negligible, any reduction in the number of messages triggers at least a directly proportional reduction in the total time required. Determining which messages are "known" means checking whether a given email is similar to previously learned ones.

When all the training data is in order, the training phase itself can begin. Conceptually, training an agglomerative clustering engine means grouping the learning corpus into classes of similar examples. Thus, in our case, the outcome of this phase is a set of email classes.

These new classes, along with their representatives (e.g. the representative of a sphere in a 3D environment might be its center), become the exponents of new spam types and are inserted into the knowledge base. All statistics are based on this constantly updated knowledge base.

Before describing the training algorithm itself, we must introduce both the data and the concepts used - from email features and similarities to clusters and training history.

The Training History

We stated earlier that our approach relies heavily on previously extracted knowledge, so we need to preserve that information, which is stored in the form of a cluster history.

Spam Types

Most of the statements in the system's mission - as presented in the introduction - revolve around the idea that spam can conceptually be divided into distinct classes. The classes in this case are more specific than a financial/pharmaceutical/mortgage/etc. division; they represent any group of spam emails that share a number of features.

That statement is only valid when accompanied by a measure of similarity between the items - in our case, emails. We decided to express similarity as a distance in a homogeneous feature space. Thus, in order to decide whether two emails are alike, we must first determine each email's representation in the feature space, and then compute the distance between the resulting points.

Spam Features

We start from a pool of selected generic features, ranging from the message size to its layout string (Musat, 2006). The features we employ are designed to extract as much information as possible from both the headers and the body of a message.

We divide these features into two classes using the following criterion: given two values of the feature, can their average be computed? The size of the message, or any other numeric feature, is a good example of a feature with a computable average. The message ID, or any other string feature encoding some message trait, is an example of the latter class (i.e. features without a computable "mean").
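For illustration, here is a minimal sketch of such a two-part representation. The concrete fields are hypothetical stand-ins, not the exact feature set used in the paper:

```python
from dataclasses import dataclass

@dataclass
class EmailFeatures:
    # Features with a computable average (the HSn part).
    size: float              # message size in bytes
    line_count: float        # hypothetical numeric feature
    # Features without a computable mean (the HSm part).
    layout_string: str       # encodes the message layout (Musat, 2006)
    mime_string: str         # encodes the MIME part structure

    def numeric(self):
        """Projection onto HSn."""
        return (self.size, self.line_count)

    def strings(self):
        """Projection onto HSm."""
        return (self.layout_string, self.mime_string)
```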
In order to manipulate emails as whole entities we must describe their traits in a homogeneous way, covering both feature classes above. For this purpose we define a pair of feature spaces - hyperspaces in essence - HSm and HSn, where HSn includes the features with a computable average and HSm includes the others.

The distance functions in these feature spaces vary depending on the feature in question. For numeric features the most straightforward choices are the Manhattan and the Euclidean distances, whereas for string encoding features the natural choice is the Levenshtein (edit) distance.
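A minimal sketch of such a combined distance, assuming the illustrative EmailFeatures above and treating every feature with equal weight (the paper does not specify its feature weights):

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance between two strings, by dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,           # insertion
                            prev[j - 1] + (ca != cb))) # substitution
        prev = curr
    return prev[-1]

def distance(x: EmailFeatures, y: EmailFeatures) -> float:
    """Distance in the combined HSm+n space: Manhattan on the numeric
    features plus Levenshtein on the string features."""
    d_num = sum(abs(a - b) for a, b in zip(x.numeric(), y.numeric()))
    d_str = sum(levenshtein(a, b) for a, b in zip(x.strings(), y.strings()))
    return d_num + d_str
```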
Our goal is to link the representatives of the spam classes to regions of the combined HSm+n feature space. The regions must be chosen so that they include most - if possible all - of the changes that can appear within a given class. Assuming that a region's representative is a single central point, we must look for the point in space whose distance to all of the class's elements is minimal. The delimited region of space equivalent to one class of elements is called a cluster, and its representative point its center.

We have thereby introduced the notion of distance in the combined feature space HSm+n. For the simple case of objects represented by low or medium dimensional feature vectors, the similarity between two objects or clusters is typically defined by an appropriate distance function between their centers in HSm+n (e.g. the Euclidean or Manhattan distance).

A time saving observation is that the layout string, as well as the MIME part string, vary little within the same cluster. The projection of a cluster onto the HSm space is thus concentrated in a small volume around its virtual mean, which means we can estimate the HSm mean from a small number of examples. This number, p, can be linked to the level of trust T we have in the feature (which can be the score of a feature selection algorithm (Guyon, 2006)). The higher the trust, the smaller the number of elements needed to accurately determine the cluster average: p ≈ 1/T.

Cluster History

We chose the cluster as the central element of our design, since all our efforts revolve around separating and manipulating each spam class as a distinct item.

Spam evolves over time - usually by either reusing an older template or by creating a new one altogether. In the first case, if the differences between the new spam wave and the old one are not major, the goal is to include both in the same cluster, since they have a great deal in common. But if the differences are substantial, we can consider the latter wave a new one. In this case, since we have no previous information, the best thing to do is to examine it as soon as possible and add it to the knowledge base.

Thus the first advantage of using a cluster history is that at each point we only need to consider new spam types - or at least ones with a high degree of innovation.

The second reason for using a repository is that the algorithm is conceptually divided into a training phase and a test (usage) phase. We need access to the training phase information to be able to properly use the algorithm in its test phase.

Last but not least, we need an information repository for providing usage statistics about the evolution of spam or the usefulness of certain clusters or features.
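The history also doubles as the source of the per-message history distances used later by the θ constraint. A sketch of a minimal history store, under the assumption that a cluster is kept as its center plus bookkeeping fields (the paper does not spell out the exact stored record):

```python
import time
from dataclasses import dataclass, field

@dataclass
class HistoryCluster:
    center: EmailFeatures        # the cluster representative
    created: float = field(default_factory=time.time)
    size: int = 0                # messages assigned so far
    last_hit: float = 0.0        # last time a message matched

class ClusterHistory:
    def __init__(self):
        self.clusters = []       # list of HistoryCluster

    def min_distance(self, x: EmailFeatures) -> float:
        """d(H, x): distance from x to the nearest history center."""
        if not self.clusters:
            return float("inf")
        return min(distance(x, h.center) for h in self.clusters)
```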
Data Division and Filtering

We mentioned in the introduction that the data used is part of our daily spam flow, and that we take a snapshot of that flow, since we can only use a finite number of examples at a given time. We also introduced the need to filter out as many useless examples as possible, to greatly enhance the system's speed.

We define a snapshot of the flow as the messages that arrive between two moments in time, t1 and t2, also known as a time window. Since the spam flow usually contains many duplicate messages, it is advisable to add a filtering stage to reduce the number of messages that carry little or no additional information.

A logical question is why we do not reduce the amount of processed items by narrowing the time window in the first place. That would indeed lower the number of emails, but it would also diminish the number of email types. Although this might not be obvious at first glance, consider the problem from a probabilistic point of view: at the extreme, if we reduce the corpus to a single remaining message, it will surely be of a single type.

What we want in this stage is to keep as few email examples as possible while keeping as many new types of emails as possible represented. We define the old spam types as the spam we already know and catch, so we rule these out. We also tried to eliminate email "duplicates" - very similar examples in terms of bytes changed - but it proved of little use. This failure stems from the fact that we did not want to relax the definition of a duplicate. Had we done so, we would have compared emails using some other feature or set of features, and those features would have been lost from the training phase's point of view.

We thus have a tradeoff between the number of features available for the training phase and the number of unneeded messages that can be filtered before that phase. We could also express this dilemma as a tradeoff between the accuracy (more features, better accuracy) and the speed (fewer examples, higher speed) of the system.

The result of this initial phase is thus a collection of spam messages previously undetected by the system - messages not similar to the spam types in the knowledge base.
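A sketch of this stage, assuming the illustrative ClusterHistory above and a hypothetical novelty threshold theta below which a message counts as already known:

```python
def filter_snapshot(flow, t1, t2, history, theta):
    """Keep only messages from the [t1, t2) time window whose distance
    to every history center is at least theta, i.e. previously unseen
    spam types. `flow` yields (timestamp, EmailFeatures) pairs."""
    corpus = []
    for ts, msg in flow:
        if not (t1 <= ts < t2):
            continue                       # outside the time window
        if history.min_distance(msg) < theta:
            continue                       # similar to a known spam type
        corpus.append(msg)
    return corpus
```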
Clustering techniques

Stream clustering

Since the task of classifying incoming spam messages into spam types is in essence stream clustering, we paid special attention to well known methods such as DenStream (Cao et al., 2006) and CluStream (Aggarwal et al., 2003), as well as to stream oriented flavors of k-means such as VFKM (Domingos et al., 2001) and RA-VFKM (Shah et al., 2004). The main difference between our method and those mentioned above is that ours is wave oriented: we are looking for a method best suited to detecting spam waves, one that correctly isolates a wave's constant aspects and disregards the ephemeral intra-wave changes. Let us begin by examining the reasons we did not choose one of the methods already mentioned.

There is a feature most stream clustering methods share: they maintain a fixed number of clusters (or micro-clusters). CluStream (Aggarwal et al., 2003), VFKM (Very Fast K-Means) (Domingos et al., 2001) and RA-VFKM (Resource Aware Very Fast K-Means) (Shah et al., 2004) are among these methods. There are two reasons why this practice is less desirable. The first is that we lose accuracy by being forced to either merge dissimilar clusters or split larger ones. The second is that in a highly noisy environment it tends to create clusters from the noise data, having to merge or delete important clusters in the process.

There is no question that spam is a noisy environment. In fact, one of the most frequent cases is that of a solitary message, or a very small group of messages, that has little or nothing in common with the others. Furthermore, our observations coincide with Tom's (Tom, 2008) in that there are very few medium sized spam campaigns - most messages come either in million strong waves or almost solitary. From the perspective of clusters holding thousands of messages, we can consider solitary spam to be noise.

But there are other factors that need to be considered. There is a particular property of the way spam is created that makes DenStream (Cao et al., 2006), for instance, inappropriate for the given task: spam usually comes in waves, meaning that groups of similar spam messages arrive at roughly the same time. This makes an exponentially decreasing weight function both time consuming and unfit for our case.

Another potential problem with DenStream's cluster maintenance is that each new message automatically modifies the coordinates of older centers, which makes it risky in a noisy environment.
Also, we have spam types that last for months, so we want to keep the corresponding centers intact for a longer period, regardless of the fact that new ones might appear in their vicinity.

Given sufficient features, the data in the feature space becomes sparse. Thus we did not find it necessary to have more than one representative per cluster, which gives rise to spherical clusters. Other methods - DenStream in particular - pay special attention to the shape of the clusters, sacrificing speed. We should not forget that speed is a crucial factor when dealing with thousands of evolving spam types and flavors.

We can summarize by stating that we need a stream clustering method that makes no assumptions about the number of clusters, uses a minimum of information to store the clusters, and has the capacity to manipulate large numbers of clusters.

K-Means

The k-means algorithm, introduced by MacQueen (1967), is by far the most popular clustering tool used in scientific and industrial applications. Given an example pool M, the aim of the algorithm is to classify all the examples into k classes, or clusters Ci. The name comes from representing each of the k clusters Ci in the feature space (in our case HSm+n) by the mean (or weighted average) ci of its points, the so-called centroid.

These centroids should be placed carefully, because different starting locations generate different results. A good choice is therefore to place them as far away from each other as possible. The next step is to take each point of the data set and associate it with the nearest centroid. When no more points are pending, the first step is complete and an early grouping is done. At this point we recompute k new centroids as the mass centers of the clusters resulting from the previous step. Once we have the k new centroids, a new binding is made between the same training points and the new set of centroids. A loop is thus generated, as a result of which the k centroids change their location step by step. The algorithm stops when no more changes are made, that is, when two consecutive centroid sets are identical.
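A minimal sketch of this baseline loop. Both dist and mean are supplied by the caller (e.g. the Manhattan distance and the coordinate-wise average), since HSm has no computable mean:

```python
import random

def kmeans(points, k, dist, mean, max_iter=100):
    """Baseline k-means: assign each point to its nearest centroid, then
    recompute centroids, until two consecutive centroid sets match."""
    centroids = random.sample(points, k)
    for _ in range(max_iter):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist(p, centroids[i]))
            clusters[nearest].append(p)
        new_centroids = [mean(c) if c else centroids[i]
                         for i, c in enumerate(clusters)]
        if new_centroids == centroids:   # convergence: identical sets
            break
        centroids = new_centroids
    return centroids, clusters
```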
Constrained K-Means

Constrained k-means (Wagstaff & Cardie, 2001) is a semi-supervised version of k-means. Using constraints ranging from the simple must-link and cannot-link to more complex ones such as the ε and δ constraints (Davidson et al., 2005), targeted constrained k-means versions have been shown to outperform the original algorithm at specific tasks. As previously said, Wave Oriented K-Means (WOKM) falls into this category as well, using a constraint we will name θ.

The θ constraint used in WOKM is in fact a minimum message-to-history distance. Given the cluster centers H in the knowledge base and the example pool M, we store for each training message xi ∈ M the minimum of the distances from itself to the history centers Hj:

    d(H, xi) = min d(xi, Hj), ∀ Hj ∈ H

During the training phase of the algorithm, we use the distance d(H, xi) as a constraint. At every iteration, as the clusters evolve, there is a current cluster collection CC. We define the distance between a given example xi and the current cluster collection as

    d(CC, xi) = min d(xi, CCj), ∀ CCj ∈ CC

The distance between the example xi and the current center collection must be smaller than d(H, xi):

    d(CC, xi) < d(H, xi), ∀ xi ∈ M

The WOKM θ resembles the ε constraint (Davidson et al., 2005) and the ε-minPts constraint in DBSCAN (Ester et al., 1996). The main difference is that θ denotes a distance between an example xi and cluster centers, while the other two express distances between individual examples. We can use cluster centers in the definition of the constraint because we have a priori knowledge about the way the spam space HSm+n is organized - the information stored in the cluster history.

With the θ constraint, given n training examples and m clusters in the training history, the complexity of the problem is O(m·n).

Introducing the θ constraint, however, also creates a whole new set of problems. As previously said, an example can only be part of clusters whose centers are closer to it than its minimum history distance d(H, xi). This means that, given an initial arrangement of the k current clusters, odds are that some (even many) examples will be left unclassified, or unassigned. Thus, if we do not accompany this change with a complementary one, we cannot regenerate the clusters' centers in the traditional k-means fashion, as the means of the clusters.
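A sketch of the constrained assignment test, reusing the illustrative distance and history above:

```python
def theta_assign(x, current_centers, h_x):
    """Assign x to the nearest current center, but only if that center
    is closer than x's minimum history distance h_x = d(H, x).
    Returns the center index, or None if x stays unassigned."""
    best, best_d = None, h_x
    for i, c in enumerate(current_centers):
        d = distance(x, c)
        if d < best_d:                 # enforces d(CC, x) < d(H, x)
            best, best_d = i, d
    return best
```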
Adaptive number of clusters

The complementary change we were referring to is dynamically changing the number of clusters. Let M' be the set of messages that could not be classified using the current cluster collection. When an example is left unclassified, it signals that an additional cluster may be needed in that region of the space. Thus, without modifying the existing clusters, we add new ones to the cluster collection.

We store the old current clusters and restart the assignment process with M' as the new initial message collection. The new current cluster collection will be a subset of M', disjoint from the old - now stored - cluster collection. Afterwards we assign the elements of M' to the new clusters. If all the mails are now assigned, we stop; otherwise we repeat the process, storing the new cluster centers that result from each iteration.

Although the clusters that result from this phase do not overlap when computed, they may come to overlap once their centers are recomputed according to the original k-means design (MacQueen, 1967). This can be corrected, if necessary, by merging adjacent clusters.

We must note that by actively adapting the number of clusters in WOKM to the spam environment, we solved one of the most important problems of the original k-means design - the need for a priori knowledge of k. Having a self regulating number of clusters greatly improves the effectiveness and applicability of WOKM.

The Triangle Inequality

Although we considered the fixed number of clusters to be the most important problem of the original k-means in the spam setting, speed is a crucial factor as well. We want the process of classifying hundreds of thousands of spam messages to be as close to real time as possible, so we need an important speed optimization. We chose Elkan's triangle inequality (Elkan, 2003).

Elkan developed a generic method of using the triangle inequality to accelerate k-means. The idea is to avoid unnecessary distance calculations, which in our case is crucial, since an important part of our features are string features, and the corresponding Levenshtein distances take longer to compute. This is achieved by comparing the distances between a given email and its known class representative with the distances between different class representatives. If other cluster centers are known to be too far away from the email, the distances to them no longer need to be computed. We slightly modified Elkan's formulae to include the θ constraint.

The basic idea is that for any three points x, y, z we have d(x, z) ≤ d(x, y) + d(y, z). Let x be the current point and c1 and c2 two centers belonging to the current cluster collection. Let u(x) be an upper bound on the distance between x and the center cx to which x is currently assigned, u(x) ≥ d(x, cx), and let h(x) = d(H, x) be the minimum distance between the history cluster centers and x. We define the maximum of the two as uh(x) = max(u(x), h(x)). We also keep a lower bound on the distance between x and each center: for each c' ∈ CC, l(x, c') ≤ d(x, c').

Now, for some center c' ∈ CC other than the center c that x is already assigned to, suppose that uh(x) < l(x, c'). That would mean that d(x, c) ≤ uh(x) < l(x, c') ≤ d(x, c'). This gives us the certainty that the distance between x and the other center is greater than the distance from x to the center it is already assigned to, so neither d(x, c) nor d(x, c') needs to be recomputed.

The only remaining problem is keeping uh(x) and l(x, c'), ∀ c' ∈ CC, up to date, which in fact means updating u(x) and l(x, c'), since h(x) is constant. For this we can use Elkan's original update procedure, as shown in the algorithm below.
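A sketch of the pruning logic for one example under the bookkeeping just described. This illustrates the bound test only, not Elkan's full algorithm; u starts at infinity while x is unassigned:

```python
def assign_with_bounds(x, centers, u, h, l, cx):
    """One θ-constrained assignment step for example x with Elkan-style
    pruning. `u` is the (exact here) distance to centers[cx], or inf if
    x is unassigned; `h` = d(H, x); `l[i]` is a lower bound on
    d(x, centers[i]). Returns the (possibly new) center index and u."""
    uh = max(u, h)
    for i, c in enumerate(centers):
        if i == cx or l[i] >= uh:
            continue                   # pruned: c cannot beat cx, or fails θ
        d = distance(x, c)
        l[i] = d                       # an exact distance is also a bound
        if d < u and d < h:            # θ: must also beat the history distance
            cx, u = i, d
            uh = max(u, h)             # tighter bound, more pruning
    return cx, u
```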
The Algorithm

The WOKM system can be split into three conceptually different phases: a training phase, in which new clusters are generated and stored in the knowledge base; a test phase, in which individual email samples are classified into the known spam classes; and post processing - maintenance and statistics.

The Training Phase

The outcome of the training phase is a final collection of centers, C.

Preprocessing

The training phase starts with a preprocessing step that initializes all the necessary components:

1. Let M be the message pool. For each training message xi ∈ M, initialize the minimum history distance h(xi) = d(H, xi) = min d(xi, Hj), ∀ Hj ∈ H, and the current center upper bound u(xi) = null.
2. Pick the initial number of centers k, either randomly if this is the first run, or relative to the number of centers determined in the last run.
3. Pick the initial k centers from the message pool M, either randomly or using a maximum separation heuristic. These newly created k centers form the current cluster collection CC.
4. Determine the distances between the centers, d(c1, c2), ∀ c1, c2 ∈ CC.
5. Initialize the triangle inequality lower bound matrix: l(x, c) = 0, ∀ c ∈ CC, ∀ x ∈ M.
6. Let the final collection contain the same elements as the current collection: C = CC.

The Main Iteration

While a solution has not been found:

1. Unassign all the given examples.
2. Assign all examples.
3. Recompute the centers.
4. Merge adjacent (similar) clusters.
5. Test the solution.

Item Unassignment

Item unassignment occurs at the start of each iteration, since we want each example to be reclassified to the closest of the newly recomputed centers. Let M' be the set of unassigned items, initialized as empty, M' = {}. For every example x ∈ M, its center cx is set to null and x is added to the set of unassigned items: cx = null; M' = M' ∪ {x}.

Item Assignment

Assigning the entire item collection comes next. We start by assigning the items to the already computed or extracted current centers CC. Usually in the first iterations, but not only then, after assigning all possible items to the current centers there will remain items too "far away" from the centers to be classified, having d(x, CC) ≥ uh(x). It is from these items that new centers will be extracted.

While card(M') ≠ 0:
  If card(M') ≠ card(M):
    Pick k' new current centers CC from M', where k'/k = card(M')/card(M) and k' < card(M').
    Initialize the lower bounds: l(x, c) = 0, ∀ c ∈ CC, ∀ x ∈ M.
    Determine the distances between the centers, d(c1, c2), ∀ c1, c2 ∈ CC.
  For each x ∈ M':
    Let cx be the center to which x is currently assigned, initially void.
    For all centers c ∈ CC with c ≠ cx, uh(x) > l(x, c) and uh(x) > ½ d(cx, c):
      Compute d(x, c).
      If d(x, c) < d(x, cx) then
        cx = c
        u(x) = d(x, c)
        l(x, c) = d(x, c)
      else l(x, c) = max(l(x, c), d(x, c)).
  If CC ≠ {}, C = CC ∪ C.

Recomputing the centers

For each center c ∈ C:

1. Let cn be the projection of the center onto HSn, the space of the features with a computable mean. Set cn to the mean of the corresponding features of the examples classified to c: cn = avg(xn).
2. Choose a number p of mails that are closest to cn in the HSn space. As specified earlier, p can be smaller than the total number of mails assigned to c.
3. Determine the HSm projection of c, cm, using k-medoids (Park et al., 2006) on the p items chosen above. K-medoids is similar to k-means, but instead of choosing the centroid as the mean of the cluster elements, it defines the center as the element whose distance to all the others is minimal.
4. The new center of the cluster is the sum of its projections, c' = cn + cm.
5. For every x assigned to c: uh(x) = uh(x) + d(c', c).
6. For every x ∈ M and every recomputed center c ∈ C: l(x, c) = max(l(x, c) - d(c', c), 0).
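A sketch of this two-part center update, reusing the earlier illustrative types; the medoid step below stands in for the k-medoids call (Park et al., 2006), and members is assumed non-empty:

```python
def recompute_center(members, p):
    """New center of one cluster: coordinate-wise mean in HSn, and the
    medoid of the p members nearest that mean in HSm."""
    n = len(members)
    # HSn projection: the mean of the numeric features.
    dims = len(members[0].numeric())
    c_n = tuple(sum(m.numeric()[d] for m in members) / n for d in range(dims))
    # Keep only the p members closest to c_n in HSn (p can be small).
    def d_num(m):
        return sum(abs(a - b) for a, b in zip(m.numeric(), c_n))
    nearest = sorted(members, key=d_num)[:p]
    # HSm projection: the medoid, i.e. the element minimizing the summed
    # string distance to all the others.
    def medoid_cost(m):
        return sum(sum(levenshtein(a, b)
                       for a, b in zip(m.strings(), o.strings()))
                   for o in nearest)
    c_m = min(nearest, key=medoid_cost)
    # The new center combines the two projections.
    return EmailFeatures(*c_n, *c_m.strings())
```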
Merging similar clusters

Since the mail assignment step can only generate new clusters, we must keep cluster inflation under control. If two centers c1, c2 ∈ C are too close to each other - that is, if the distance between them is smaller than a threshold t - they are merged. Thus, if d(c1, c2) < t:

1. Let M'1 and M'2 be the subsets of M' whose examples have cx = c1 and cx = c2, respectively.
2. Create a new center c3 whose position in HSn is the weighted average of the positions of c1 and c2:

    c3(f) = (c1(f) · card(M'1) + c2(f) · card(M'2)) / (card(M'1) + card(M'2)), ∀ f ∈ HSn

   and whose position in HSm is that of the point xmin closest to both centers: c3(f) = xmin(f), ∀ f ∈ HSm.
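A sketch of this merge rule under the illustrative representation above:

```python
def merge_centers(c1, c2, members1, members2):
    """Merge two nearby clusters: weighted average of the centers in HSn,
    and the member closest to both old centers in HSm."""
    n1, n2 = len(members1), len(members2)
    merged = members1 + members2
    # HSn: weighted average of the two centers.
    c3_num = tuple((a * n1 + b * n2) / (n1 + n2)
                   for a, b in zip(c1.numeric(), c2.numeric()))
    # HSm: the member whose summed string distance to both centers is minimal.
    def cost(m):
        return sum(levenshtein(a, b)
                   for other in (c1, c2)
                   for a, b in zip(m.strings(), other.strings()))
    c3_str = min(merged, key=cost).strings()
    return EmailFeatures(*c3_num, *c3_str)
```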
3. Update the bounds of the affected examples: for every x previously assigned to c1 or c2 (denote its old center c1/2), uh(x) = uh(x) + d(c1/2, c3), and for every center c, l(x, c) = max(l(x, c) - d(c1/2, c3), 0).

Solution Testing

Following the k-means design, we consider that a solution has been found when the clusters generated in two consecutive iterations are identical. Let C be the set of clusters at the end of the current iteration. At the end of each iteration we store C, and it becomes the previous cluster collection PC. If the centers match pairwise - each c1 ∈ CC equals its counterpart c2 ∈ PC - a solution has been found. It is a known fact that k-means is subject to local minima, so a possible optimization is restarting the algorithm if time permits.
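Putting the pieces together, a compact skeleton of one WOKM training run built from the illustrative helpers above. Initialization heuristics, bound maintenance and the merge step are elided, and every message is assumed to have d(H, x) > 0 (known spam was filtered out beforehand):

```python
import random

def wokm_train(corpus, history, k, p, max_iter=50):
    """θ-constrained assignment with adaptive cluster creation and center
    recomputation, iterated until two consecutive center sets match."""
    h = {id(x): history.min_distance(x) for x in corpus}   # θ per example
    centers = random.sample(corpus, k)
    prev = None
    for _ in range(max_iter):
        assignment = {}
        unassigned = list(corpus)
        while unassigned:
            still = []
            for x in unassigned:
                i = theta_assign(x, centers, h[id(x)])
                if i is None:
                    still.append(x)        # no center beats d(H, x)
                else:
                    assignment[id(x)] = i
            if not still:
                break
            # Adaptive step: promote some unassigned examples to centers,
            # without touching the existing ones.
            k_new = max(1, len(still) * len(centers) // len(corpus))
            centers = centers + random.sample(still, min(k_new, len(still)))
            unassigned = still
        # Recompute each center from its assigned members.
        clusters = [[x for x in corpus if assignment.get(id(x)) == i]
                    for i in range(len(centers))]
        centers = [recompute_center(m, p) if m else c
                   for c, m in zip(centers, clusters)]
        # (Merging of centers closer than t via merge_centers elided.)
        if centers == prev:                # identical consecutive sets
            break
        prev = list(centers)
    return centers
```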
Results and statistics

Benchmarking

In this section we present results obtained from running the system on live data. An important problem stands in the way of properly benchmarking the method: there are no public spam flows. The only one we are aware of was the CEAS 2007 Spam Challenge, but due to its small size it is not suitable for the spam clustering task.

Public spam or mail corpora, such as the TREC corpus, have one or both of the following problems. One is that they mix spam and ham emails, making clustering irrelevant. The other is that even a spam-only corpus is an offline corpus: we cannot use the system on it, because there would be no training history and the algorithm would degenerate into traditional k-means. The results in that case would hardly be worth mentioning.

Workload

The test system was calibrated to run on a workload of roughly 60,000 emails per hour. Although totally new spam adds up to only 2% of the total volume of spam, due to changes to previous templates about 6% of the emails are not detected as part of the clusters in the history. The machine we ran the tests on is a 2.8 GHz Intel dual core PC with 1 GB of memory running Ubuntu 7.10.

Feature Selection and Accuracy

We define WOKM's accuracy as the percentage of examples that are classified into clusters in which similar examples form the majority.

As previously said, the speed of the system is highly important - we want to run it on as many emails as possible, as soon as possible. All other factors being equal, we need to reduce the number of features in order to increase the throughput.

The tradeoff between the system's speed and accuracy comes as no surprise, as more features generally mean better accuracy. But if we neglect the effect of the correlations between features, and consider all features to be of equal importance, we find a potentially interesting result, shown in Figure 1. Although feature selection is not the goal of this paper, the fact that adding new features beyond some point is not justified by an equivalent increase in accuracy is important: it provides a limit beyond which the present method can hardly go. We can conclude that on our spam flow the most probable accuracy is 97.8%, given that we use sufficient features (more than t = 18 equally important features).

Figure 1: WOKM accuracy depending on the number of features.

Data Division

The accuracy of the method also depends on the frequency with which data is collected. We can choose to set the time window to 10 seconds or to one hour. In the first case we would probably fragment all the clusters; in the latter, the results might be issued too late to matter. A value in between should be chosen to diminish both problems as much as possible. Our experience shows that since all the flavors of a given spam burst usually appear within the first 10-15 minutes, there is no reason to extend the time window any further.

Useful Statistics

One of the requirements of the system was to provide useful statistics. From our point of view, one such statistic is the percentage of novelty in the received spam. To determine it, we only need to look at the percentage of spam marked as belonging to a known cluster (one that is part of the history); the remaining, unassigned emails contain some degree of novelty. Given the features we provided to WOKM, we found that about 6 percent of the spam we receive has a high degree of novelty.

Figure 2: Percentages of spam that contain novelties.

Among the statistics that the present model facilitates we could also name the per-feature separation statistics, which might act as a feature selection mechanism.

Conclusions

We presented a system that can cluster a flow of massive proportions in real time. The main difference between this model and previous ones is the presence of a clustering history, which brings several advantages: a variable number of clusters, and a way to determine novel spam types and focus on those clusters alone.

Although benchmarking WOKM (Wave Oriented K-Means) against other equivalent algorithms is difficult, its accuracy in the preliminary tests and the statistics it facilitates make it a viable solution for industrial spam clustering. The simplicity of the method, and the fact that it produces good results with a relatively small number of features, also recommend it as a good alternative to other spam clustering methods.

Although it is not an antispam method by itself, since it does not separate ham from spam, it can provide a useful edge when combined with other generic methods such as Bayesian filters and neural networks (Cosoi, 2006). While the generic filters keep the overall accuracy above a given threshold, spam fingerprinting can be used to handle the remaining hard examples, and we can hardly imagine spam fingerprinting without clustering.
References

J. B. MacQueen (1967). "Some methods for classification and analysis of multivariate observations". Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability (pp. 281-297). Berkeley, CA: University of California Press.

K. Wagstaff et al. (2001). "Constrained K-Means Clustering with Background Knowledge". Proceedings of the Eighteenth International Conference on Machine Learning (pp. 577-584). Palo Alto, CA: Morgan Kaufmann.

A. C. Cosoi (2006). "An Antispam Filter Based on Adaptive Neural Networks". Spam Conference 2006, Boston, MA.

I. Guyon (2006). "Feature Extraction: Foundations and Applications". Springer.

C. C. Aggarwal et al. (2003). "A Framework for Clustering Evolving Data Streams". Proceedings of VLDB 2003, Berlin, Germany.

F. Cao et al. (2006). "Density-Based Clustering over an Evolving Data Stream with Noise". Proceedings of the 6th SIAM Conference on Data Mining (SDM 2006), Bethesda, MD.

R. Shah et al. (2004). "Resource-Aware Very Fast K-Means for Ubiquitous Data Stream Mining".

P. Domingos (2001). "A General Method for Scaling Up Machine Learning Algorithms and its Application to Clustering". Proceedings of ICML 2001 (pp. 106-113), Williamstown, MA, USA: Morgan Kaufmann.

A. C. Cosoi et al. (2007). "Assigning Relevancies to Individual Features for Large Patterns in ART Map Networks". DAAAM International Symposium "Intelligent Manufacturing & Automation: Focus on Creativity, Responsibility, and Ethics of Engineers", 2007, Zadar, Croatia.

P. Tom (2008). "Clustering Botnets". Spam Conference 2008, Boston, MA, USA.

I. Davidson et al. (2005). "Clustering With Constraints: Feasibility Issues and the k-Means Algorithm". IEEE ICDM 2005.

C. Musat (2006). "Layout Based Spam Filtering". International Journal of Computer Systems Science and Engineering, Volume 3, Number 2. ICPA 2006, Vienna, Austria.

C. Elkan (2003). "Using the Triangle Inequality to Accelerate k-Means". Proceedings of ICML 2003, Washington, DC, USA.

M. Ester et al. (1996). "A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise". Proceedings of KDD 1996.

H. Park et al. (2006). "A K-means-like Algorithm for K-medoids Clustering and Its Performance". Proceedings of ICCIE 2006, Taipei, Taiwan.
Practical Knowledge Representation
Practical Knowledge Representation
Practical Knowledge Representation
Practical Knowledge Representation

More Related Content

What's hot

A STATISTICAL APPROACH TO DETECT DENIAL OF SERVICE ATTACKER
A STATISTICAL APPROACH TO DETECT DENIAL OF SERVICE ATTACKERA STATISTICAL APPROACH TO DETECT DENIAL OF SERVICE ATTACKER
A STATISTICAL APPROACH TO DETECT DENIAL OF SERVICE ATTACKERJournal For Research
 
Ballpark Figure Algorithms for Data Broadcast in Wireless Networks
Ballpark Figure Algorithms for Data Broadcast in Wireless NetworksBallpark Figure Algorithms for Data Broadcast in Wireless Networks
Ballpark Figure Algorithms for Data Broadcast in Wireless NetworksEditor IJCATR
 
Efficient Pseudo-Relevance Feedback Methods for Collaborative Filtering Recom...
Efficient Pseudo-Relevance Feedback Methods for Collaborative Filtering Recom...Efficient Pseudo-Relevance Feedback Methods for Collaborative Filtering Recom...
Efficient Pseudo-Relevance Feedback Methods for Collaborative Filtering Recom...Daniel Valcarce
 
Stop thinking, start tagging - Tag Semantics emerge from Collaborative Verbosity
Stop thinking, start tagging - Tag Semantics emerge from Collaborative VerbosityStop thinking, start tagging - Tag Semantics emerge from Collaborative Verbosity
Stop thinking, start tagging - Tag Semantics emerge from Collaborative VerbosityInovex GmbH
 
Mills_Metafeatures.doc
Mills_Metafeatures.docMills_Metafeatures.doc
Mills_Metafeatures.docbutest
 
cs6490_project_report_keystroke_dynamics
cs6490_project_report_keystroke_dynamicscs6490_project_report_keystroke_dynamics
cs6490_project_report_keystroke_dynamicsMyungho Jung
 
A Survey: SMS Spam Filtering
A Survey: SMS Spam FilteringA Survey: SMS Spam Filtering
A Survey: SMS Spam Filteringijtsrd
 
Emailphishing(deep anti phishnet applying deep neural networks for phishing e...
Emailphishing(deep anti phishnet applying deep neural networks for phishing e...Emailphishing(deep anti phishnet applying deep neural networks for phishing e...
Emailphishing(deep anti phishnet applying deep neural networks for phishing e...Venkat Projects
 

What's hot (10)

A STATISTICAL APPROACH TO DETECT DENIAL OF SERVICE ATTACKER
A STATISTICAL APPROACH TO DETECT DENIAL OF SERVICE ATTACKERA STATISTICAL APPROACH TO DETECT DENIAL OF SERVICE ATTACKER
A STATISTICAL APPROACH TO DETECT DENIAL OF SERVICE ATTACKER
 
Ballpark Figure Algorithms for Data Broadcast in Wireless Networks
Ballpark Figure Algorithms for Data Broadcast in Wireless NetworksBallpark Figure Algorithms for Data Broadcast in Wireless Networks
Ballpark Figure Algorithms for Data Broadcast in Wireless Networks
 
Efficient Pseudo-Relevance Feedback Methods for Collaborative Filtering Recom...
Efficient Pseudo-Relevance Feedback Methods for Collaborative Filtering Recom...Efficient Pseudo-Relevance Feedback Methods for Collaborative Filtering Recom...
Efficient Pseudo-Relevance Feedback Methods for Collaborative Filtering Recom...
 
Stop thinking, start tagging - Tag Semantics emerge from Collaborative Verbosity
Stop thinking, start tagging - Tag Semantics emerge from Collaborative VerbosityStop thinking, start tagging - Tag Semantics emerge from Collaborative Verbosity
Stop thinking, start tagging - Tag Semantics emerge from Collaborative Verbosity
 
Mills_Metafeatures.doc
Mills_Metafeatures.docMills_Metafeatures.doc
Mills_Metafeatures.doc
 
Email spam detection
Email spam detectionEmail spam detection
Email spam detection
 
cs6490_project_report_keystroke_dynamics
cs6490_project_report_keystroke_dynamicscs6490_project_report_keystroke_dynamics
cs6490_project_report_keystroke_dynamics
 
E1062530
E1062530E1062530
E1062530
 
A Survey: SMS Spam Filtering
A Survey: SMS Spam FilteringA Survey: SMS Spam Filtering
A Survey: SMS Spam Filtering
 
Emailphishing(deep anti phishnet applying deep neural networks for phishing e...
Emailphishing(deep anti phishnet applying deep neural networks for phishing e...Emailphishing(deep anti phishnet applying deep neural networks for phishing e...
Emailphishing(deep anti phishnet applying deep neural networks for phishing e...
 

Viewers also liked (13)

Certificate of Completion
Certificate of CompletionCertificate of Completion
Certificate of Completion
 
孤獨與品味 (Rev1)
孤獨與品味 (Rev1)孤獨與品味 (Rev1)
孤獨與品味 (Rev1)
 
resume
resumeresume
resume
 
Joomla seo
Joomla seoJoomla seo
Joomla seo
 
Meegeven presentatie tool keukentafelspel-4
Meegeven presentatie tool keukentafelspel-4Meegeven presentatie tool keukentafelspel-4
Meegeven presentatie tool keukentafelspel-4
 
Emerging issue 2015
Emerging issue 2015Emerging issue 2015
Emerging issue 2015
 
Sutuhadong
SutuhadongSutuhadong
Sutuhadong
 
AeU EBM
AeU EBMAeU EBM
AeU EBM
 
CEO Certificate_Q2_2016
CEO Certificate_Q2_2016CEO Certificate_Q2_2016
CEO Certificate_Q2_2016
 
Resume_Akhilesh
Resume_AkhileshResume_Akhilesh
Resume_Akhilesh
 
Carybé
CarybéCarybé
Carybé
 
2020?
2020?2020?
2020?
 
Flex for php_developers_info_q
Flex for php_developers_info_qFlex for php_developers_info_q
Flex for php_developers_info_q
 

Similar to Practical Knowledge Representation

Study, analysis and formulation of a new method for integrity protection of d...
Study, analysis and formulation of a new method for integrity protection of d...Study, analysis and formulation of a new method for integrity protection of d...
Study, analysis and formulation of a new method for integrity protection of d...ijsrd.com
 
Text Based Fuzzy Clustering Algorithm to Filter Spam E-mail
Text Based Fuzzy Clustering Algorithm to Filter Spam E-mailText Based Fuzzy Clustering Algorithm to Filter Spam E-mail
Text Based Fuzzy Clustering Algorithm to Filter Spam E-mailijsrd.com
 
Spam Detection in Social Networks Using Correlation Based Feature Subset Sele...
Spam Detection in Social Networks Using Correlation Based Feature Subset Sele...Spam Detection in Social Networks Using Correlation Based Feature Subset Sele...
Spam Detection in Social Networks Using Correlation Based Feature Subset Sele...Editor IJCATR
 
Spam Detection in Social Networks Using Correlation Based Feature Subset Sele...
Spam Detection in Social Networks Using Correlation Based Feature Subset Sele...Spam Detection in Social Networks Using Correlation Based Feature Subset Sele...
Spam Detection in Social Networks Using Correlation Based Feature Subset Sele...Editor IJCATR
 
Spam Detection in Social Networks Using Correlation Based Feature Subset Sele...
Spam Detection in Social Networks Using Correlation Based Feature Subset Sele...Spam Detection in Social Networks Using Correlation Based Feature Subset Sele...
Spam Detection in Social Networks Using Correlation Based Feature Subset Sele...Editor IJCATR
 
Spam Detection in Social Networks Using Correlation Based Feature Subset Sele...
Spam Detection in Social Networks Using Correlation Based Feature Subset Sele...Spam Detection in Social Networks Using Correlation Based Feature Subset Sele...
Spam Detection in Social Networks Using Correlation Based Feature Subset Sele...Editor IJCATR
 
A Deep Analysis on Prevailing Spam Mail Filteration Machine Learning Approaches
A Deep Analysis on Prevailing Spam Mail Filteration Machine Learning ApproachesA Deep Analysis on Prevailing Spam Mail Filteration Machine Learning Approaches
A Deep Analysis on Prevailing Spam Mail Filteration Machine Learning Approachesijtsrd
 
Identification of Spam Emails from Valid Emails by Using Voting
Identification of Spam Emails from Valid Emails by Using VotingIdentification of Spam Emails from Valid Emails by Using Voting
Identification of Spam Emails from Valid Emails by Using VotingEditor IJCATR
 
A Survey on Spam Filtering Methods and Mapreduce with SVM
A Survey on Spam Filtering Methods and Mapreduce with SVMA Survey on Spam Filtering Methods and Mapreduce with SVM
A Survey on Spam Filtering Methods and Mapreduce with SVMIRJET Journal
 
International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)IJERD Editor
 
WITH SEMANTICS AND HIDDEN MARKOV MODELS TO AN ADAPTIVE LOG FILE PARSER
WITH SEMANTICS AND HIDDEN MARKOV MODELS TO AN ADAPTIVE LOG FILE PARSERWITH SEMANTICS AND HIDDEN MARKOV MODELS TO AN ADAPTIVE LOG FILE PARSER
WITH SEMANTICS AND HIDDEN MARKOV MODELS TO AN ADAPTIVE LOG FILE PARSERkevig
 
WITH SEMANTICS AND HIDDEN MARKOV MODELS TO AN ADAPTIVE LOG FILE PARSER
WITH SEMANTICS AND HIDDEN MARKOV MODELS TO AN ADAPTIVE LOG FILE PARSERWITH SEMANTICS AND HIDDEN MARKOV MODELS TO AN ADAPTIVE LOG FILE PARSER
WITH SEMANTICS AND HIDDEN MARKOV MODELS TO AN ADAPTIVE LOG FILE PARSERijnlc
 
14420-Article Text-17938-1-2-20201228.pdf
14420-Article Text-17938-1-2-20201228.pdf14420-Article Text-17938-1-2-20201228.pdf
14420-Article Text-17938-1-2-20201228.pdfMehwishKanwal14
 
Seeds Affinity Propagation Based on Text Clustering
Seeds Affinity Propagation Based on Text ClusteringSeeds Affinity Propagation Based on Text Clustering
Seeds Affinity Propagation Based on Text ClusteringIJRES Journal
 
Differential evolution detection models for SMS spam
Differential evolution detection models for SMS spam  Differential evolution detection models for SMS spam
Differential evolution detection models for SMS spam IJECEIAES
 
Progressive Duplicate Detection
Progressive Duplicate DetectionProgressive Duplicate Detection
Progressive Duplicate Detection1crore projects
 
IEEE 2014 JAVA DATA MINING PROJECTS Discovering emerging topics in social str...
IEEE 2014 JAVA DATA MINING PROJECTS Discovering emerging topics in social str...IEEE 2014 JAVA DATA MINING PROJECTS Discovering emerging topics in social str...
IEEE 2014 JAVA DATA MINING PROJECTS Discovering emerging topics in social str...IEEEFINALYEARSTUDENTPROJECTS
 
2014 IEEE JAVA DATA MINING PROJECT Discovering emerging topics in social stre...
2014 IEEE JAVA DATA MINING PROJECT Discovering emerging topics in social stre...2014 IEEE JAVA DATA MINING PROJECT Discovering emerging topics in social stre...
2014 IEEE JAVA DATA MINING PROJECT Discovering emerging topics in social stre...IEEEFINALYEARSTUDENTPROJECT
 

Similar to Practical Knowledge Representation (20)

Study, analysis and formulation of a new method for integrity protection of d...
Study, analysis and formulation of a new method for integrity protection of d...Study, analysis and formulation of a new method for integrity protection of d...
Study, analysis and formulation of a new method for integrity protection of d...
 
Text Based Fuzzy Clustering Algorithm to Filter Spam E-mail
Text Based Fuzzy Clustering Algorithm to Filter Spam E-mailText Based Fuzzy Clustering Algorithm to Filter Spam E-mail
Text Based Fuzzy Clustering Algorithm to Filter Spam E-mail
 
Spam Detection in Social Networks Using Correlation Based Feature Subset Sele...
Spam Detection in Social Networks Using Correlation Based Feature Subset Sele...Spam Detection in Social Networks Using Correlation Based Feature Subset Sele...
Spam Detection in Social Networks Using Correlation Based Feature Subset Sele...
 
Spam Detection in Social Networks Using Correlation Based Feature Subset Sele...
Spam Detection in Social Networks Using Correlation Based Feature Subset Sele...Spam Detection in Social Networks Using Correlation Based Feature Subset Sele...
Spam Detection in Social Networks Using Correlation Based Feature Subset Sele...
 
Spam Detection in Social Networks Using Correlation Based Feature Subset Sele...
Spam Detection in Social Networks Using Correlation Based Feature Subset Sele...Spam Detection in Social Networks Using Correlation Based Feature Subset Sele...
Spam Detection in Social Networks Using Correlation Based Feature Subset Sele...
 
Spam Detection in Social Networks Using Correlation Based Feature Subset Sele...
Spam Detection in Social Networks Using Correlation Based Feature Subset Sele...Spam Detection in Social Networks Using Correlation Based Feature Subset Sele...
Spam Detection in Social Networks Using Correlation Based Feature Subset Sele...
 
A Deep Analysis on Prevailing Spam Mail Filteration Machine Learning Approaches
A Deep Analysis on Prevailing Spam Mail Filteration Machine Learning ApproachesA Deep Analysis on Prevailing Spam Mail Filteration Machine Learning Approaches
A Deep Analysis on Prevailing Spam Mail Filteration Machine Learning Approaches
 
Identification of Spam Emails from Valid Emails by Using Voting
Identification of Spam Emails from Valid Emails by Using VotingIdentification of Spam Emails from Valid Emails by Using Voting
Identification of Spam Emails from Valid Emails by Using Voting
 
Spam email filtering
Spam email filteringSpam email filtering
Spam email filtering
 
A Survey on Spam Filtering Methods and Mapreduce with SVM
A Survey on Spam Filtering Methods and Mapreduce with SVMA Survey on Spam Filtering Methods and Mapreduce with SVM
A Survey on Spam Filtering Methods and Mapreduce with SVM
 
Spam Clustering
Spam ClusteringSpam Clustering
Spam Clustering
 
International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)
 
WITH SEMANTICS AND HIDDEN MARKOV MODELS TO AN ADAPTIVE LOG FILE PARSER
WITH SEMANTICS AND HIDDEN MARKOV MODELS TO AN ADAPTIVE LOG FILE PARSERWITH SEMANTICS AND HIDDEN MARKOV MODELS TO AN ADAPTIVE LOG FILE PARSER
WITH SEMANTICS AND HIDDEN MARKOV MODELS TO AN ADAPTIVE LOG FILE PARSER
 
WITH SEMANTICS AND HIDDEN MARKOV MODELS TO AN ADAPTIVE LOG FILE PARSER
WITH SEMANTICS AND HIDDEN MARKOV MODELS TO AN ADAPTIVE LOG FILE PARSERWITH SEMANTICS AND HIDDEN MARKOV MODELS TO AN ADAPTIVE LOG FILE PARSER
WITH SEMANTICS AND HIDDEN MARKOV MODELS TO AN ADAPTIVE LOG FILE PARSER
 
14420-Article Text-17938-1-2-20201228.pdf
14420-Article Text-17938-1-2-20201228.pdf14420-Article Text-17938-1-2-20201228.pdf
14420-Article Text-17938-1-2-20201228.pdf
 
Seeds Affinity Propagation Based on Text Clustering
Seeds Affinity Propagation Based on Text ClusteringSeeds Affinity Propagation Based on Text Clustering
Seeds Affinity Propagation Based on Text Clustering
 
Differential evolution detection models for SMS spam
Differential evolution detection models for SMS spam  Differential evolution detection models for SMS spam
Differential evolution detection models for SMS spam
 
Progressive Duplicate Detection
Progressive Duplicate DetectionProgressive Duplicate Detection
Progressive Duplicate Detection
 
IEEE 2014 JAVA DATA MINING PROJECTS Discovering emerging topics in social str...
IEEE 2014 JAVA DATA MINING PROJECTS Discovering emerging topics in social str...IEEE 2014 JAVA DATA MINING PROJECTS Discovering emerging topics in social str...
IEEE 2014 JAVA DATA MINING PROJECTS Discovering emerging topics in social str...
 
2014 IEEE JAVA DATA MINING PROJECT Discovering emerging topics in social stre...
2014 IEEE JAVA DATA MINING PROJECT Discovering emerging topics in social stre...2014 IEEE JAVA DATA MINING PROJECT Discovering emerging topics in social stre...
2014 IEEE JAVA DATA MINING PROJECT Discovering emerging topics in social stre...
 

More from butest

EL MODELO DE NEGOCIO DE YOUTUBE
EL MODELO DE NEGOCIO DE YOUTUBEEL MODELO DE NEGOCIO DE YOUTUBE
EL MODELO DE NEGOCIO DE YOUTUBEbutest
 
1. MPEG I.B.P frame之不同
1. MPEG I.B.P frame之不同1. MPEG I.B.P frame之不同
1. MPEG I.B.P frame之不同butest
 
LESSONS FROM THE MICHAEL JACKSON TRIAL
LESSONS FROM THE MICHAEL JACKSON TRIALLESSONS FROM THE MICHAEL JACKSON TRIAL
LESSONS FROM THE MICHAEL JACKSON TRIALbutest
 
Timeline: The Life of Michael Jackson
Timeline: The Life of Michael JacksonTimeline: The Life of Michael Jackson
Timeline: The Life of Michael Jacksonbutest
 
Popular Reading Last Updated April 1, 2010 Adams, Lorraine The ...
Popular Reading Last Updated April 1, 2010 Adams, Lorraine The ...Popular Reading Last Updated April 1, 2010 Adams, Lorraine The ...
Popular Reading Last Updated April 1, 2010 Adams, Lorraine The ...butest
 
LESSONS FROM THE MICHAEL JACKSON TRIAL
LESSONS FROM THE MICHAEL JACKSON TRIALLESSONS FROM THE MICHAEL JACKSON TRIAL
LESSONS FROM THE MICHAEL JACKSON TRIALbutest
 
Com 380, Summer II
Com 380, Summer IICom 380, Summer II
Com 380, Summer IIbutest
 
The MYnstrel Free Press Volume 2: Economic Struggles, Meet Jazz
The MYnstrel Free Press Volume 2: Economic Struggles, Meet JazzThe MYnstrel Free Press Volume 2: Economic Struggles, Meet Jazz
The MYnstrel Free Press Volume 2: Economic Struggles, Meet Jazzbutest
 
MICHAEL JACKSON.doc
MICHAEL JACKSON.docMICHAEL JACKSON.doc
MICHAEL JACKSON.docbutest
 
Social Networks: Twitter Facebook SL - Slide 1
Social Networks: Twitter Facebook SL - Slide 1Social Networks: Twitter Facebook SL - Slide 1
Social Networks: Twitter Facebook SL - Slide 1butest
 
Facebook
Facebook Facebook
Facebook butest
 
The two clusters c1 and c2 are merged as follows:
• 1. Let M'1/2 be the subsets of M' whose examples x have cx = c1/2, respectively.
• 2. Create a new center c3, whose position is a weighted average of the positions of c1 and c2 in HSn: ∀f ∈ HSn, c3f = (c1f · card(M'1) + c2f · card(M'2)) / (card(M'1) + card(M'2)).
• 3. Lower the bounds on the distance between each example x and the moved centers: lx,c = max(lx,c − d(c1/2, c3), 0).
A minimal code sketch of this merge step is given below.
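The following sketch illustrates the merge step above, assuming a Euclidean metric on the feature hyperspace HSn and NumPy arrays for the centers; the function name merge_clusters and its parameters are illustrative choices, not part of the system described here.

import numpy as np

def merge_clusters(center1, n1, center2, n2, lb1, lb2):
    # center1, center2: positions of c1 and c2 in HSn (1-D arrays)
    # n1, n2:           card(M'1) and card(M'2)
    # lb1, lb2:         lower bounds l(x, c1) and l(x, c2), one per example
    # New center c3: per-feature weighted average of the old centers.
    center3 = (center1 * n1 + center2 * n2) / (n1 + n2)
    # How far each old center moved when merged into c3.
    d1 = np.linalg.norm(center1 - center3)
    d2 = np.linalg.norm(center2 - center3)
    # Triangle inequality: shrinking a bound to c1/2 by the shift
    # d(c1/2, c3) keeps it a valid lower bound to c3.
    lb3 = np.maximum(np.maximum(lb1 - d1, lb2 - d2), 0.0)
    return center3, lb3

Keeping the shrunk bounds valid lets later iterations skip distance computations, in the spirit of Elkan (2003).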
Solution Testing
In keeping with the k-means design, we consider that a solution has been found when the clusters generated in two consecutive iterations are identical. Let C be the set of clusters at the end of the current iteration; once the iteration completes, we store C and it becomes the previous cluster collection PC. If every cluster c1 ∈ C has an identical counterpart c2 ∈ PC, a solution has been found. Since k-means is known to be subject to local minima, a possible optimization is to restart the algorithm if time permits. A minimal sketch of this stopping test follows.
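The stopping test reduces to comparing two partitions. In the sketch below each cluster is represented by the set of example identifiers it contains; this representation is an assumption made for illustration.

def converged(current, previous):
    # current, previous: iterables of clusters, each cluster being an
    # iterable of example ids. Two consecutive iterations match when
    # the partitions they induce are identical.
    return {frozenset(c) for c in current} == {frozenset(c) for c in previous}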
Results and statistics
Benchmarking
In this section we present results obtained by running the system on live data. An important problem stands in the way of properly benchmarking the method: there are no public spam flows. The only one we are aware of was the CEAS 2007 Spam Challenge, which is too small to be suitable for the spam clustering task.
Public spam or mail corpora, such as the TREC corpus, have one or both of the following problems. The first is that they mix spam and ham emails, which makes clustering irrelevant. The second is that even a spam-only corpus is an offline corpus: on such data there would be no training history, the algorithm would degenerate into traditional k-means, and the results would hardly be worth mentioning.
Workload
The test system was calibrated to run on a workload of roughly 60,000 emails per hour. Although totally new spam accounts for only 2% of the total spam volume, changes to previous templates mean that about 6% of the emails are not detected as part of the clusters in the history. The machine we ran the tests on was a 2.8 GHz Intel dual-core PC with 1 GB of memory, running Ubuntu 7.10.
Feature Selection and Accuracy
We define WOKM's accuracy as the percentage of all examples that are assigned to clusters in which examples similar to them form the majority.
As stated earlier, the speed of the system is highly important: we want to run it on as many emails as possible, as soon as possible. All other factors being equal, we must reduce the number of features in order to increase the throughput.
The tradeoff between the system's speed and its accuracy comes as no surprise, since more features generally mean better accuracy. But if we neglect the effects of correlations between features and consider all features equally important, we find a potentially interesting result, shown in Figure 1. Feature selection is not the goal of this paper, but the fact that adding features beyond some point is not justified by an equivalent increase in accuracy matters: it provides a limit beyond which the present method can hardly go. We can conclude that on our spam flow the most probable accuracy is 97.8%, provided we use sufficiently many features (more than t = 18 equally important features).
Figure 1: WOKM accuracy depending on the number of features.
Data Division
The accuracy of the method also depends on the frequency with which data is collected. We could set the time window to 10 seconds or to one hour; in the first case we would probably fragment all the clusters, while in the latter the results might be issued too late to matter. A value in between should be chosen to reduce both problems as much as possible. Our experience shows that all the flavors of a given spam burst usually appear within the first 10-15 minutes, so there is no reason to extend the time window any further.
Useful Statistics
One of the requirements of the system was to provide useful statistics. From our point of view, one such statistic is the percentage of novelty in the received spam. To determine it, we only need to look at the percentage of spam marked as belonging to a known cluster (one that is part of the history); the remaining, unassigned emails contain some degree of novelty. Given the features we provided to the WOKM, we found that about 6 percent of the spam we receive has a high degree of novelty.
Figure 2: Percentages of spam that contain novelties.
Among the statistics the present model facilitates we could also name the per-feature separation statistics, which might act as a feature selection mechanism. A sketch of how the accuracy and novelty figures above can be computed is given below.
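As an illustration, both measurements reduce to counting cluster assignments. The parameter names below (assignments, similar_majority, history) are assumptions made for the sketch, not part of the system's interface.

def wokm_accuracy(assignments, similar_majority):
    # assignments:      one cluster id per classified example
    # similar_majority: ids of clusters whose members are mostly
    #                   similar to one another
    hits = sum(1 for c in assignments if c in similar_majority)
    return 100.0 * hits / len(assignments)

def novelty_percentage(assignments, history):
    # history: ids of clusters already present in the history;
    # examples assigned outside them carry some degree of novelty
    novel = sum(1 for c in assignments if c not in history)
    return 100.0 * novel / len(assignments)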
Conclusions
We presented a system that can cluster a spam flow of massive proportions in real time. The main difference between this model and previous ones is the presence of a clustering history, which brings several advantages: a variable number of clusters, and a way to determine novel spam types and focus on those clusters alone.
Although benchmarking WOKM (Wave Oriented K-Means) against other equivalent algorithms is difficult, its accuracy in the preliminary tests and the statistics it facilitates make it a viable solution for industrial spam clustering. The simplicity of the method and the fact that it produces good results with a relatively small number of features also recommend it as a good alternative to other spam clustering methods.
Although it is not an antispam method by itself, since it does not separate ham from spam, it can provide a useful edge when combined with generic methods such as Bayesian filters and neural networks (Cosoi, 2006). While the generic filters keep the overall accuracy above a given threshold, spam fingerprinting can be used to handle the remaining hard examples, and we can hardly imagine spam fingerprinting without clustering.
References
J. B. MacQueen (1967). "Some methods for classification and analysis of multivariate observations". Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability (pp. 281-297). Berkeley, CA: University of California Press.
K. Wagstaff et al. (2001). "Constrained K-Means Clustering with Background Knowledge". Proceedings of the Eighteenth International Conference on Machine Learning (pp. 577-584). Palo Alto, CA: Morgan Kaufmann.
A. C. Cosoi (2006). "An Antispam Filter Based on Adaptive Neural Networks". Spam Conference 2006, Boston, MA.
I. Guyon (2006). "Feature Extraction: Foundations and Applications". Springer.
C. C. Aggarwal et al. (2003). "A Framework for Clustering Evolving Data Streams". Proceedings of VLDB 2003, Berlin, Germany.
F. Cao et al. (2006). "Density-Based Clustering over an Evolving Data Stream with Noise". Proceedings of the 6th SIAM Conference on Data Mining (SDM 2006), Bethesda, MD.
R. Shah et al. (2004). "Resource-Aware Very Fast K-Means for Ubiquitous Data Stream Mining".
P. Domingos (2001). "A General Method for Scaling Up Machine Learning Algorithms and its Application to Clustering". Proceedings of ICML 2001 (pp. 106-113). Williamstown, MA, USA: Morgan Kaufmann.
A. C. Cosoi et al. (2007). "Assigning Relevancies to Individual Features for Large Patterns in ART Map Networks". DAAAM International Symposium "Intelligent Manufacturing & Automation: Focus on Creativity, Responsibility, and Ethics of Engineers", Zadar, Croatia.
P. Tom (2008). "Clustering Botnets". Spam Conference 2008, Boston, MA, USA.
I. Davidson et al. (2005). "Clustering With Constraints: Feasibility Issues and the k-Means Algorithm". IEEE ICDM 2005.
C. Musat (2006). "Layout Based Spam Filtering". International Journal of Computer Systems Science and Engineering, Volume 3, Number 2; ICPA 2006, Vienna, Austria.
C. Elkan (2003). "Using the Triangle Inequality to Accelerate k-Means". Proceedings of ICML 2003, Washington, DC, USA.
M. Ester et al. (1996). "A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise". Proceedings of KDD 1996.
H. Park et al. (2006). "A K-means-like Algorithm for K-medoids Clustering and Its Performance". Proceedings of ICCIE 2006, Taipei, Taiwan.