IJRIT International Journal of Research in Information Technology, Volume 2, Issue 5, May 2014, Pg: 426-434

International Journal of Research in Information Technology (IJRIT)

www.ijrit.com

ISSN 2001-5569

Feature Selection for Intrusion Detection System using Support Vector Machines

P Indira Priyadarsini¹, Dr. I Ramesh Babu²

¹ Research Scholar, Dept. of Computer Science & Engineering, Acharya Nagarjuna University, Guntur, A.P., India. [email protected]

² Dept. of Computer Science & Engineering, Acharya Nagarjuna University, Guntur, A.P., India. [email protected]

Abstract
Security is the main concern in maintaining reliable communication in the networked world. To ensure security, modeling efficient Intrusion Detection Systems (IDSs) is becoming mandatory. The application of data mining techniques is expanding across many fields, and among these, feature selection procedures have become crucial. We therefore apply the Euclidean distance for choosing the best features from a large set of features: a ranking score is given to every feature, and based on these scores the predominant features are selected. This improves classification performance for detecting suspicious activities and reduces storage space. Since intrusion detection is essentially a classification task in the machine learning context, we used Support Vector Machines (SVMs) for separating attacks from normal data. We conducted experiments on the KDD Cup 99 dataset. The results show that this method is well suited for detecting intrusive behavior, with low false positive rates and good accuracy.

Key words: Security, Intrusion Detection System (IDS), Data mining, Euclidean distance, Machine Learning, Support Vector Machines (SVMs).

1. Introduction
An intrusion is a sequence of events, unknown and unintended by the user, that compromises the security of a computer system. It can originate from outside or inside the system [1]. An Intrusion Detection System (IDS) is a scheme for detecting intrusions; it acts as a watchdog, guarding against unauthorized network accesses, malicious attacks, and so on [2]. There are two main categories of intrusion detection systems: host-based and network-based. The first examines internal data within a computer system, while the second deals with data transmitted between computer systems [3].

Two data mining techniques are commonly applied in intrusion detection: classification and clustering. Classification is a machine learning technique for categorizing unseen data into one of several predefined classes based on a training dataset; it aims at predicting intrusion activity from observed behavior. In clustering, the classes are not defined before learning; the learning stage itself decides the classes present in the database. If the goal of an intrusion detection system is to distinguish abnormal data from normal data, classification is the more appropriate technique; if the purpose is to identify the type of attack or malicious action, clustering suits well [4].

Feature selection (attribute selection) is the task of selecting the best subset of the original features, mainly to avoid redundant and irrelevant ones. There are many reasons for applying feature selection: better classification results, shorter training time, improved generalization performance, and better model interpretability. Even with the best classifiers there is no guarantee of identifying the features that actually contribute to classification; therefore, when feature selection is combined with a good classifier, classification can be achieved with greater precision [5][6].

Generally there are two kinds of feature selection approaches: 1) filters, which rank features or feature subsets on the basis of statistical properties, independently of the classifier; and 2) wrappers, which add features one at a time until no further improvement is obtained, typically using a heuristic search such as hill climbing to identify feature subsets and a classifier or score (such as the F-score or Random Forest importance) to assess them. In this study we use a simple distance metric, the Euclidean distance, as the measure for choosing outstanding features from the large feature set. We then apply Support Vector Machines (SVMs), which are well-understood and effective learning machines, for the classification process.

The paper is organized as follows. Section 2 reviews historical work on feature selection in the field of IDS. Section 3 gives an overview of IDS, the types of attacks, and a description of the KDD Cup 99 dataset. Section 4 discusses the Euclidean distance and SVMs. Section 5 presents the proposed feature selection method. Section 6 describes the experiments conducted and summarizes the results. The last section gives conclusions, followed by future work.

2. Related Work
As research on data mining techniques has grown, feature selection has become an indispensable pre-processing step in intrusion detection. It reduces the number of traffic features without a negative effect on classification accuracy, leading to a great improvement in the efficiency of IDS [15]. In related work using Classification and Regression Trees (CART) and Bayesian Networks (BN), the authors proposed ensemble feature selection algorithms that yielded a lightweight IDS [16]. In earlier work, Sung and Mukkamala used a ranking technique to select significant features for intrusion detection: one input feature is deleted from the dataset at a time, the reduced dataset is used for training and testing a Support Vector Machine (SVM), and the classifier's performance is compared with that obtained on the original feature set (all attributes); the importance of the features is then ranked according to rules framed using fuzzy logic [17]. In another related work, the most relevant features were selected using Generalized Discriminant Analysis (GDA) as the feature selection technique with SVM as the classifier, achieving good results [18]. The authors of [19] stated that a feature selection algorithm need not be included in the model directly; it is always run before the actual intrusion detection process starts. More recently, Iftikhar et al., in search of an optimal feature selection strategy, used a genetic algorithm (GA) and Principal Component Analysis (PCA) to select genetic principal components, yielding a subset of features with the highest sensitivity and optimal discriminatory power [20].

3. An Overview of IDS
In 1998 the DARPA intrusion detection evaluation program [14] set up a simulation of a typical U.S. Air Force LAN to acquire raw TCP/IP dump data for a network, which during that period was blasted with multiple attacks. DARPA'98 comprises seven weeks of network traffic, processed into about five million connection records of about 100 bytes each. A 10% sample of this network traffic was then taken as the KDD Cup 99 dataset, which has become the benchmark dataset in the IDS area.

3.1 KDD Cup 99 Dataset
The Knowledge Discovery and Data Mining (KDD) Cup 99 dataset [13] is widely used by IDS researchers. It was taken from the Third International Knowledge Discovery and Data Mining Tools Competition. Each connection record consists of 41 attributes, of both continuous and discrete type. There are 22 categories of attacks from the following four classes: Denial of Service (DOS), Remote to Local (R2L), User to Root (U2R), and Probe. The dataset holds 391,458 DOS attack records, 97,278 normal records, 4,107 Probe attack records, 1,126 R2L attack records, and 52 U2R attack records [8].

3.2 IDS Attacks
Among the network traffic there are four basic types of attacks. Each attack class in the KDD Cup 99 dataset represents a type of simulated attack: Denial of Service (DOS), User to Root (U2R), Remote to Local (R2L), and Probing, as mentioned in Section 3.1.

- Denial of Service (DOS): an attack in which the attacker makes some computing or memory resource too busy or too full to handle legitimate requests, by sending malicious packets.
- User to Root (U2R): an attack in which the attacker, starting from access to a normal user account, exploits some vulnerability to gain access to the root account of the target system.
- Remote to Local (R2L): an attack in which the attacker, who has no account on the target machine, exploits some flaw to gain local access to that machine.
- Probing: an attack in which the attacker attempts to gather information about a network of computers, for the apparent purpose of circumventing its security controls.

Table I below describes the 22 types of attacks used in the dataset.

TABLE I: Details of Attacks

Type of Attack | Attack Pattern
DOS   | back, land, neptune, pod, smurf, teardrop
U2R   | buffer_overflow, loadmodule, perl, rootkit
R2L   | ftp_write, guess_passwd, imap, multihop, phf, spy, warezclient, warezmaster
Probe | ipsweep, nmap, portsweep, satan

4.1 Euclidean Distance
The Euclidean distance is the best-known distance metric, mainly used as a filter approach in the field of data mining. It is also called the Euclidean norm or Euclidean metric. It is based on the Pythagorean theorem [7], so it is also referred to as the Pythagorean metric. It is defined as the square root of the sum of the squares of the differences between the corresponding coordinates of two points. The Euclidean distance between two points in n dimensions, P = (p1, p2, p3, ..., pn) and Q = (q1, q2, q3, ..., qn), is given as:

d(P, Q) = √( (p1 − q1)² + (p2 − q2)² + ... + (pn − qn)² )    (1)

We compute this distance for every attribute in the dataset, i.e., we calculate the distance between each attribute vector and the class-label vector. This is done for all 41 attributes.
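As a minimal sketch (assuming NumPy is available; the toy matrix and the function name are ours, not the paper's), the per-attribute distance of equation (1), applied between each attribute column and the class-label vector, can be written as:

```python
import numpy as np

def feature_scores(X, y):
    """Score each feature by its Euclidean distance to the class-label
    vector, as in equation (1): score_j = sqrt(sum_i (x_ij - c_i)^2)."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    # Broadcast: subtract the label vector from every column of X.
    return np.sqrt(((X - y[:, None]) ** 2).sum(axis=0))

# Toy example with 3 instances and 2 attributes (not KDD data).
X = [[1.0, 0.0],
     [2.0, 1.0],
     [3.0, 0.0]]
y = [1, 2, 1]
print(feature_scores(X, y))  # one score per attribute column
```

The broadcast form computes all 41 column distances in one pass when applied to the full training matrix.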

4.2 Support Vector Machines
Support Vector Machines (SVMs) are learning machines that we use for classifying attack and normal data. They are built from support vectors, the points critical to the classification process. The strengths of SVMs are that they give an optimal, global solution, good generalization performance, and robustness. The definition of SVMs is incomplete without Statistical Learning Theory (SLT); SVMs were introduced by Vapnik and Chervonenkis (VC) [9][10]. They cope well with the curse of dimensionality and with overfitting.


SVMs maximize the margin by forming the Maximum Marginal Hyperplane (MMH), which is obtained by identifying the support vectors. Support vectors are identified by training on the dataset with the linear decision function w·xi + b, where w is the weight vector, b is the bias, xi is the i-th training instance (the vector of its attribute values), and yi is its class label. For the positive class (yi = +1) we require H1: w·xi + b > 0, and for the negative class (yi = −1) we require H2: w·xi + b < 0. The MMH itself satisfies w·xi + b = 0. The distance between the two hyperplanes H1 and H2 is known as the margin and is given by 2/‖w‖. To obtain the solution we must maximize the margin, and maximizing the margin is equivalent to solving the following minimization problem:

min f(w) = ‖w‖²/2    (2)

where w and b must be chosen so that two conditions are met:

w·xi + b ≥ 1  if yi = +1    (3)
w·xi + b ≤ −1 if yi = −1    (4)

Both inequalities can be combined as yi(w·xi + b) ≥ 1 for i = 1, 2, 3, ..., N. The problem then becomes

min f(w) = ‖w‖²/2  subject to  yi(w·xi + b) ≥ 1 for i = 1, 2, 3, ..., N    (5)

Equation (5) can be solved by the Lagrange multiplier method. Solving it yields the values of w and b together with the Lagrange multipliers λi [11], which identify the support vectors: the data points with λi > 0 are the support vectors. With these we can build the SVM for classifying data. In building SVMs, an efficient training algorithm called Sequential Minimal Optimization (SMO) is used; it is the standard algorithm for SVMs and finds the best solution to the quadratic optimization problem [12]. Many software tools are available for training SVMs, such as LIBSVM, Weka, SVMLight, and MATLAB. The generalization ability of an SVM depends on choosing the best parameters. A diagrammatic representation of an SVM is shown in Fig. 1.

[Fig 1: SVM Interpretation. The MMH separates class y = −1 from class y = +1; the hyperplanes H1 and H2 each lie at distance 1/‖w‖ from the MMH, and the support vectors lie on H1 and H2.]
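As a hedged illustration of this classification step, here is a minimal sketch using scikit-learn's SVC (backed by LIBSVM, one of the tools named above); the tiny two-cluster dataset and the variable names are invented for illustration:

```python
from sklearn.svm import SVC

# Tiny invented two-class dataset: two linearly separable clusters.
X = [[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],
     [2.0, 2.0], [2.2, 1.9], [1.9, 2.1]]
y = [-1, -1, -1, +1, +1, +1]

clf = SVC(kernel="linear")  # linear decision function w.x + b
clf.fit(X, y)

print(clf.support_vectors_)  # the training points with lambda_i > 0
print(clf.predict([[0.1, 0.1], [2.1, 2.0]]))
```

The support vectors reported are exactly the training points whose Lagrange multipliers are nonzero; switching to an RBF kernel, as used later in Section 6, only changes the `kernel` argument.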

5. Proposed Work
Feature subset selection is a commonly used technique in machine learning and data mining, and it has gained popularity in current research. It improves the model by removing inappropriate, noisy, and redundant features; with it we can build simpler models and achieve better classification results. Feature selection can be based on a score given to the features: features with low scores are considered less important for classification and are removed. In the proposed model we take the Euclidean distance as the metric for scoring the features.

For each feature in the KDD Cup 99 training dataset, we compute the Euclidean distance to the corresponding class label, as described in equation (1). There are 41 features in the training dataset, denoted F = {F1, F2, ..., F41}, with corresponding class-label vector C. Let attribute Fj = {x1j, x2j, ..., xnj}, where j indexes the attributes and n is the number of training instances, and let C = {c1, c2, ..., cn}. The Euclidean distance metric then becomes

dj(Fj, C) = √( Σ_{i=1..n} (xij − ci)² )    (6)

We calculate the Euclidean distance for each attribute by equation (6), obtaining one distance per attribute, i.e., 41 distances in total. The attributes are then sorted from highest to lowest distance, and a limit value is set so that the attributes with the highest Euclidean distances are taken as the most promising. Table II shows the layout of the dataset, with the vector of data points in each attribute and the class label, used in calculating the score.

TABLE II: Dataset with Vector Data Points in Each Attribute and Class Label

Attr1 | Attr2 | Attr3 | ..... | Attr41 | Class
x11   | x12   | x13   | ..... | x1,41  | c1
x21   | x22   | x23   | ..... | x2,41  | c2
x31   | x32   | x33   | ..... | x3,41  | c3
⁞     | ⁞     | ⁞     | ⁞     | ⁞      | ⁞
xn1   | xn2   | xn3   | ..... | xn,41  | cn
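The ranking and limit-value step described above can be sketched in plain Python (the toy scores and the limit here are illustrative, not the experimental values):

```python
def rank_and_select(scores, limit):
    """Rank attributes by Euclidean-distance score (highest first) and
    keep the indices of those whose score exceeds the limit value."""
    ranked = sorted(range(len(scores)), key=lambda j: scores[j], reverse=True)
    selected = [j for j in ranked if scores[j] > limit]
    return ranked, selected

scores = [489.3, 272.6, 2221.6, 701.0, 481.2]  # toy scores, 5 attributes
ranked, selected = rank_and_select(scores, 490)
print(ranked)    # -> [2, 3, 0, 4, 1]
print(selected)  # -> [2, 3]
```

In the experiments of Section 6 the same procedure is applied to the 41 real scores with a limit of 490.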

Now we take the attributes with the highest scores and build the model using Support Vector Machines (SVMs). In this study we conducted 10-fold cross-validation: the dataset is partitioned at random into 10 equal parts, in each of which the classes are represented in approximately the same proportions as in the full dataset. Each part is held out in turn, training is performed on the remaining 9 parts, and the error rate is measured on the holdout set. This training procedure is carried out 10 times in total on different training sets, and the 10 error rates are averaged to obtain the overall error estimate.

Finally, the results obtained are evaluated using standard metrics: the confusion matrix, the Detection Rate (DR), and the False Alarm Rate (FAR). The confusion matrix tabulates, per class, the actual against the predicted classifications made by the classifier. The detection rate (DR) is the percentage of instances generated by suspicious programs that are correctly labeled as abnormal by the classifier. The false positive rate is the percentage of normal records that are mislabeled as anomalous. The proposed hierarchy for constructing the IDS is shown in Fig. 2.

[Fig 2: Proposed Hierarchy of Constructing IDS. Raw KDD Cup 99 dataset → Feature Selection (using Euclidean distance) → Intrusion Detection using SVMs → Evaluating Results.]
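As a hedged illustration of these evaluation metrics, one plausible way to derive a per-class detection rate and a false-alarm rate from a multi-class confusion matrix is shown below (the 2-class matrix is invented, not taken from the paper's tables, and the exact per-class convention is our reading of the definitions above):

```python
def detection_and_false_alarm_rates(cm, normal_idx):
    """cm[i][j] = count of actual class i predicted as class j.
    DR for an attack class = fraction of its instances predicted as
    any non-normal class; FAR = fraction of normal instances
    predicted as some attack class."""
    n = len(cm)
    dr = {}
    for i in range(n):
        if i == normal_idx:
            continue
        total = sum(cm[i])
        detected = total - cm[i][normal_idx]  # not mistaken for normal
        dr[i] = detected / total
    normal_total = sum(cm[normal_idx])
    far = (normal_total - cm[normal_idx][normal_idx]) / normal_total
    return dr, far

# Invented 2-class example: rows/cols = [normal, attack].
cm = [[90, 10],   # 90 normal correct, 10 false alarms
      [20, 80]]   # 80 attacks detected, 20 missed
dr, far = detection_and_false_alarm_rates(cm, normal_idx=0)
print(dr)   # -> {1: 0.8}
print(far)  # -> 0.1
```

The same function generalizes directly to the five-class matrices reported in Section 6.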


6. Experiments and Results
In our experiments we took the 10% KDD Cup 99 dataset and applied all practicable pre-processing techniques. This was done to reduce training time and cost, and hence to improve the results. The resulting dataset contains 14,027 instances, a subset of the 10% KDD Cup 99 dataset with no redundant records in any class. Each class volume is proportionate to its relative size in the original dataset, so the subset contains 3,000 normal, 10,000 DOS, 574 Probe, 401 R2L, and 52 U2R instances. We then assigned numeric values to the five class labels: 5 for normal, 4 for DOS, 3 for Probe, 2 for R2L, and 1 for U2R. This is necessary for conducting our feature selection process.

For illustration, for attribute 1 the Euclidean distance is calculated as √Σ(Attr1 − class label)². That is: 1) calculate the difference between every vector point in Attr1 and the class label, and square it; 2) perform step (1) for all 14,027 instances; 3) take the sum of the 14,027 values; 4) take the square root of the sum. This gives the Euclidean distance for attribute 1; the same four steps are carried out for all 41 attributes. Each attribute thus obtains a value, treated as its score. The scores obtained are shown in Table III.

TABLE III: Scores Obtained for the 41 Attributes Using Euclidean Distance

Attribute no. | Attribute name              | Score obtained
Att1  | duration                    | 489.26
Att2  | protocol_type               | 272.60
Att3  | service                     | 2221.57
Att4  | flag                        | 701.02
Att5  | src_bytes                   | 481.16
Att6  | dst_bytes                   | 1299.58
Att7  | land                        | 491.75
Att8  | wrong_fragment              | 491.70
Att9  | urgent                      | 491.74
Att10 | hot                         | 529.10
Att11 | num_failed_logins           | 491.75
Att12 | logged_in                   | 405.42
Att13 | num_compromised             | 488.44
Att14 | root_shell                  | 491.68
Att15 | su_attempted                | 491.75
Att16 | num_root                    | 491.45
Att17 | num_file_creations          | 492.21
Att18 | num_shells                  | 491.74
Att19 | num_access_files            | 491.29
Att20 | num_outbound_cmds           | 491.75
Att21 | is_host_login               | 491.75
Att22 | is_guest_login              | 490.73
Att23 | count                       | 20414.29
Att24 | srv_count                   | 20446.24
Att25 | serror_rate                 | 491.60
Att26 | srv_serror_rate             | 491.58
Att27 | rerror_rate                 | 491.55
Att28 | srv_rerror_rate             | 491.16
Att29 | same_srv_rate               | 29178.79
Att30 | diff_srv_rate               | 490.98
Att31 | srv_diff_host_rate          | 482.45
Att32 | dst_host_count              | 21435.73
Att33 | dst_host_srv_count          | 28030.43
Att34 | dst_host_same_srv_rate      | 383.81
Att35 | dst_host_diff_srv_rate      | 488.48
Att36 | dst_host_same_src_port_rate | 467.75
Att37 | dst_host_srv_diff_host_rate | 489.76
Att38 | dst_host_serror_rate        | 491.53
Att39 | dst_host_srv_serror_rate    | 491.64
Att40 | dst_host_rerror_rate        | 491.24
Att41 | dst_host_srv_rerror_rate    | 491.08

From the scores obtained we set the limit value at 490 and selected the attributes whose scores are greater than 490. In this way we selected 29 of the 41 attributes: Att3, Att4, Att6, Att7, Att8, Att9, Att10, Att11, Att14, Att15, Att16, Att17, Att18, Att19, Att20, Att21, Att23, Att24, Att25, Att26, Att27, Att28, Att29, Att32, Att33, Att38, Att39, Att40, and Att41. We then took these 29 attributes and applied the SVM classifier, using the Radial Basis Function kernel for classification. The time taken to construct the model was 823.35 s. For comparison with our proposed approach, another experiment with no feature selection (all 41 attributes) was conducted using the same SVM; the time taken to construct that model was 982 s. The confusion matrices obtained with no feature selection and with the proposed model are given in Tables IV and V respectively.

Table IV: Confusion Matrix Obtained with No Feature Selection

Actual \ Predicted | Normal | DOS  | Probe | R2L | U2R | %
Normal             | 2806   | 181  | 13    | 0   | 0   | 93.5
DOS                | 1885   | 7949 | 109   | 48  | 9   | 79.4
Probe              | 54     | 69   | 446   | 5   | 0   | 78
R2L                | 188    | 102  | 57    | 45  | 9   | 11
U2R                | 19     | 16   | 5     | 7   | 5   | 9.6
%                  | 57     | 96   | 70.7  | 43  | 22  |

Table V: Confusion Matrix Obtained for the Proposed Model

Actual \ Predicted | Normal | DOS  | Probe | R2L | U2R | %
Normal             | 2869   | 95   | 36    | 0   | 0   | 95.6
DOS                | 898    | 9004 | 94    | 4   | 0   | 90
Probe              | 38     | 47   | 486   | 3   | 0   | 84.6
R2L                | 198    | 98   | 42    | 51  | 12  | 13.2
U2R                | 11     | 18   | 7     | 7   | 9   | 17.3
%                  | 72     | 97   | 64    | 75  | 42  |
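The classification and cross-validation procedure used in these experiments can be sketched with scikit-learn (assumed available; the synthetic two-class data below is a stand-in for the preprocessed KDD subset, and the variable names are ours):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the preprocessed, feature-selected data:
# 200 instances, 4 numeric features, binary labels (not KDD data).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 2] > 0).astype(int)

# RBF-kernel SVM, as in the experiments, evaluated with
# stratified 10-fold cross-validation as described in Section 5.
clf = SVC(kernel="rbf")
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
acc = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(acc.mean())  # average of the 10 fold accuracies
```

`StratifiedKFold` keeps the class proportions approximately equal in each fold, matching the partitioning described earlier.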

The comparison of Detection Rate (DR) and False Alarm Rate (FAR) for both experiments, with no feature selection (41 attributes) and with the proposed model (29 attributes), is given in Table VI.

TABLE VI: DR and FAR of Both Models (41 attributes = SVM with no feature selection; 29 attributes = SVM with the proposed model)

Class  | DR, 41 attrs (%) | FAR, 41 attrs (%) | DR, 29 attrs (%) | FAR, 29 attrs (%)
Normal | 93.5             | 19                | 95.6             | 10.3
DOS    | 79.4             | 11.2              | 90               | 6.4
Probe  | 78               | 1.36              | 84.36            | 1.33
R2L    | 11               | 0.44              | 13.2             | 0.102
U2R    | 9.6              | 0.12              | 17.3             | 0.08

These results show that the detection rates are improved by our proposed model compared with using all 41 attributes with the SVM classifier, and the false positive rates are decreased, giving more efficient intrusion detection. In the experiments we achieved 88% accuracy with the proposed model, against 80.2% accuracy with no feature selection.

7. Conclusion
In this work we have shown that the Euclidean distance, used as a metric for feature selection, achieves good results compared with using no feature selection with a Support Vector Machine. The method is well suited for detecting intrusive behavior, with low false positive rates and good accuracy. Using the ranking of the features we are able to select the most promising features and discard the less promising ones, and we applied SVM because of its high generalization power. As future work we will employ other classifiers that may improve performance and produce more accurate results for the detection of attacks, and we will explore other feature selection techniques, so that we can build more efficient Intrusion Detection Systems.

References
[1] M. Sheikhan, et al., "Application of Fuzzy Association Rules-Based Feature Selection and Fuzzy ARTMAP to Intrusion Detection," Majlesi Journal of Electrical Engineering, vol. 5, 2011.
[2] R. H. Gong, M. Zulkernine, and P. Abolmaesumi, "A Software Implementation of a Genetic Algorithm Based Approach to Network Intrusion Detection," Proceedings of the Sixth International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing and First ACIS International Workshop on Self-Assembling Wireless Networks (SNPD/SAWN'05), 2005.
[3] Stefano Zanero, "Behavioral Intrusion Detection," in Proceedings of ISCIS 2004, volume 3280 of Lecture Notes in Computer Science, pages 657-666, Kemer-Antalya, Turkey, October 2004. Springer.
[4] L. Han, "Using a Dynamic K-means Algorithm to Detect Anomaly Activities," 2011, pp. 1049-1052.
[5] Srilatha Chebrolu, Ajith Abraham, and Johnson P. Thomas, "Hybrid Feature Selection for Modeling Intrusion Detection Systems," Neural Information Processing, Lecture Notes in Computer Science, volume 3316, 2004, pp. 1020-1025.
[6] Li J. P., Chen Z. Y., Wei L. W., Xu W. X., Kou G., "Feature selection via least squares support feature machine," International Journal of Information Technology and Decision Making 6(4), pp. 671-686, 2007.
[7] http://www.econ.upf.edu/~michael/stanford/maeb4.pdf
[8] Andrew Sung, S. Mukkamala, "Feature Selection for Intrusion Detection using Neural Networks and Support Vector Machines," Transportation Research Record: Journal of the Transportation Research Board 1822.1, 2003, pp. 33-39.
[9] Boser, Guyon, and Vapnik, "A training algorithm for optimal margin classifiers," Proceedings of the Fifth Annual Workshop on Computational Learning Theory, pp. 144-152, 1992.
[10] Cortes C., Vapnik V., "Support vector networks," Machine Learning 20, pp. 273-297, 1995.
[11] P Indira Priyadarsini, Nagaraju Devarakonda, I Ramesh Babu, "A Chock-Full Survey on Support Vector Machines," International Journal of Computer Science and Software Engineering, Vol. 3, Issue 10, 2013.
[12] Platt J., "Fast training of support vector machines using sequential minimal optimization," in Schölkopf, B., Burges, C. J. C., Smola, A. J. (Eds.), Advances in Kernel Methods: Support Vector Learning, Cambridge, MA: MIT Press, pp. 185-208, 1999.
[13] http://kdd.ics.uci.edu/databases/kddcup99/task.html
[14] http://www.ll.mit.edu/mission/communications/ist/corpora/ideval/data/index.html


[15] Rupali Datti, Bhupendra Verma, "Feature Reduction for Intrusion Detection Using Linear Discriminant Analysis," (IJCSE) International Journal on Computer Science and Engineering, Vol. 02, No. 04, 2010, pp. 1072-1078.
[16] Srilatha Chebrolu, Ajith Abraham, and Johnson P. Thomas, "Hybrid Feature Selection for Modeling Intrusion Detection Systems," Springer, 2004, pp. 1020-1025.
[17] Sung A. H., Mukkamala S., "Feature Selection for Intrusion Detection using Neural Networks and Support Vector Machines," Journal of the Transportation Research Board, 2003.
[18] P Indira Priyadarsini, I Ramesh Babu, "Modeling Intrusion Detection System based on Generalized Discriminant Analysis and Support Vector Machines," International Conference on Recent Trends in Engineering and Technology Sciences, 2014, pp. 8-12.
[19] Gu G., Fogla P., Dagon D., Lee W., Skoric B., "Towards an Information-Theoretic Framework for Analyzing Intrusion Detection Systems," in Gollmann, D., Meier, J., Sabelfeld, A. (Eds.), ESORICS 2006, LNCS, vol. 4189, pp. 527-546, Springer, Heidelberg, 2006.
[20] Iftikhar Ahmad, Muhammad Hussain, Abdullah Alghamdi, "Enhancing SVM performance in intrusion detection using optimal feature subset selection based on genetic principal components," Springer, 2013.

