American Journal of Circuits, Systems and Signal Processing, Vol. 1, No. 3, August 2015 Publish Date: Aug. 24, 2015 Pages: 125-132

Using SVM in Prediction of Sequential Data

Mohammad Fouladgar, Zahra Rezaei*

Department of Computer Engineering, Nourabad Mamasani Branch, Islamic Azad University, Nourabad Mamasani, Iran

Abstract

Recognition systems have found applications in almost all fields. However, most classification algorithms achieve good performance only on specific problems; they are not robust enough for other problems. Recent research has therefore turned toward combinational methods, which offer more power, robustness, resistance, accuracy and generality. Combination of Multiple Classifiers (CMC) can be considered a general solution method for pattern recognition problems. Although the ensemble created by the proposed method may not always outperform every classifier it contains, it always possesses the diversity needed to form an ensemble, and consequently it always outperforms the simple classifier.

Keywords

Classifier Ensemble, Combination of Multiple Classifiers, Support Vector Machine


1. Introduction

Recognition systems have found applications in almost all fields. However, most classification algorithms achieve good performance only on specific problems; they are not robust enough for other problems. Recent research has therefore turned toward combinational methods, which offer more power, robustness, resistance, accuracy and generality. Combination of Multiple Classifiers (CMC) can be considered a general solution method for pattern recognition problems. The inputs of CMC are the results of the separate classifiers, and its output is their combined decision [1], [2].

These methods train multiple base classifiers and then combine their predictions. Since the generalization ability of an ensemble can be significantly better than that of a single classifier, combinational methods have been a hot research topic in recent years [2], [3], [81]-[123], and they are firmly established as a practical and effective solution for difficult problems [4]. They appear under numerous names: hybrid methods, decision combination, multiple experts, mixture of experts, classifier ensembles, cooperative agents, opinion pool, decision forest, classifier fusion, combinational systems and so on. Combinational methods usually improve classification, because classifiers with different features and methodologies can complement each other [4]-[6]. Kuncheva [7], using the Condorcet Jury theorem [8], has shown that a combination of classifiers usually operates better than a single classifier provided that its components are independent; that is, the more diverse the classifiers in the ensemble, the more their error can be reduced. In general, theoretical and empirical work shows that a good ensemble is one whose individual classifiers are both accurate and diverse; in other words, the individual classifiers make their errors on different parts of the input space [9], [10]. Many approaches have been proposed to construct such ensembles. One group of methods obtains diverse individuals by training accurate classifiers on different training sets, as in bagging, boosting, cross validation and the use of artificial training examples [10]-[13]. Another group adopts different topologies, initial weight settings, parameter settings and training algorithms to obtain the individuals. For example, Rosen [14], [43]-[64] adjusted the training algorithm of the network by introducing a penalty term that encourages individual networks to be decorrelated. For broader coverage of ensemble methods, readers are referred to [7] and [15].

Section 2 briefly reviews the levels at which classifiers can be combined. In section 3 we try to obtain truly independent and diverse classifiers by manipulating the data set. Section 4 reports the experimental results, and section 5 concludes.

2. Combining Classifiers

In general, a combinational classifier may be created in four steps [7]; that is, classifiers can be combined at four levels. At the data level, independent classifiers are trained on manipulated data: bagging and boosting are examples of this approach [11], [16], [23]-[42], and other methods manipulate the data by relabeling it [17]. In these methods, each classifier is trained on a different subset of the data instead of the whole data set. In step three, a subset of the features is used to obtain diversity in the ensemble, so that each classifier is trained on a different subset of features [15], [18]-[19], [65]-[80]; a sketch of this level follows this paragraph. In step two, different kinds of classifiers can be used to create the ensemble [15]. Finally, in step one, the method of combining (fusion) is considered.
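As an illustration of the feature-subset level, the following minimal sketch (ours, not taken from the paper) trains several scikit-learn SVMs, each on a different random subset of the Iris features; the ensemble size of 5 and the subset size of 2 are arbitrary illustrative choices.

    # Level-three diversity: each SVM sees a different random feature subset.
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    rng = np.random.default_rng(0)
    ensemble = []
    for _ in range(5):
        feats = rng.choice(X.shape[1], size=2, replace=False)  # random feature subset
        clf = SVC().fit(X_tr[:, feats], y_tr)                  # one SVM per subset
        ensemble.append((feats, clf))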

In combining classifiers, we intend to increase classification performance. There are several ways to combine classifiers. The simplest is to find the best classifier and use it as the main classifier; this is offline CMC. Another method, named online CMC, uses all classifiers in the ensemble, for example through voting. In this paper we use the majority voting method; a minimal sketch follows.
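Continuing the sketch above, a plurality (majority) vote over the (feature subset, classifier) pairs could look as follows; integer class labels are assumed, and majority_vote is an illustrative helper of ours, not a library API.

    # Majority (plurality) voting: each sample gets the label most members chose.
    import numpy as np

    def majority_vote(ensemble, X):
        votes = np.stack([clf.predict(X[:, feats]) for feats, clf in ensemble])
        return np.array([np.bincount(col.astype(int)).argmax() for col in votes.T])

    print((majority_vote(ensemble, X_te) == y_te).mean())  # ensemble accuracy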

3. Proposed Method

Due to their robustness, ensemble methods have found many usages in different applications. Here we first obtain an ensemble of non-persistent classifiers on the training set, and then combine the outputs those classifiers generate over the validation set using the simple averaging method.

Definition: A data point is defined as erroneous if the support the classifier assigns to its strongest class other than the correct one exceeds the support of its correct class by more than a threshold; here we set this threshold to 2%.
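A hedged sketch of this test, assuming a fitted probabilistic classifier (e.g. scikit-learn's SVC(probability=True)) whose classes are labeled 0..k-1 so that y_val indexes the probability columns directly; erroneous_mask is an illustrative name of ours.

    # Flag validation points whose strongest wrong class beats the correct class
    # by more than the threshold (2% in the paper).
    import numpy as np

    def erroneous_mask(clf, X_val, y_val, threshold=0.02):
        proba = clf.predict_proba(X_val)          # class supports
        idx = np.arange(len(y_val))
        correct = proba[idx, y_val]               # support of the true class
        masked = proba.copy()
        masked[idx, y_val] = -np.inf              # hide the true class
        best_other = masked.max(axis=1)           # strongest other class
        return best_other - correct > threshold   # True -> erroneous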

The method takes a data set as input and partitions it into three subsets: a training set, a testing set and a validation set. The data of each class is then extracted from the original validation set. The algorithm first trains a classifier on the training set and adds it to our ensemble. Using this classifier, we identify the erroneous data points of the validation set, partitioning the validation data into two classes: erroneous and non-erroneous. We label the validation data points according to these two classes and then, using a pairwise (binary) classifier, approximate the probability of an error occurring; this pairwise classifier in effect works as an error detector. Next, all data, including training, testing and validation, are fed to that classifier, and its outputs are appended as new features of those data points. At the next step, using linear discriminant analysis (LDA [20]), we reduce the dimensionality of this new space back to that of the original space. We repeat this process for a predefined number of iterations, which yields that number of data sets and, consequently, that number of classifiers [124]-[155]. A sketch of one iteration is given below.
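One iteration of this loop might be sketched as follows, under stated assumptions: SVC serves as both the base classifier and the pairwise error detector, erroneous_mask comes from the sketch above, and PCA stands in for the paper's LDA step (scikit-learn's LDA yields at most n_classes − 1 components, so it cannot restore the original dimensionality here).

    # One iteration: train base SVM, detect erroneous validation points, train a
    # binary error detector, append its error probability as a new feature, and
    # reduce the augmented space back to the original dimensionality.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC

    def one_iteration(X_tr, y_tr, X_val, y_val, X_te, threshold=0.02):
        base = SVC(probability=True).fit(X_tr, y_tr)         # goes into the ensemble
        err = erroneous_mask(base, X_val, y_val, threshold)  # needs both outcomes present
        detector = SVC(probability=True).fit(X_val, err.astype(int))
        def augment(X):                                      # error probability as a feature
            return np.hstack([X, detector.predict_proba(X)[:, [1]]])
        X_all = [augment(X) for X in (X_tr, X_val, X_te)]
        pca = PCA(n_components=X_tr.shape[1]).fit(X_all[0])  # back to original dimension
        return base, [pca.transform(X) for X in X_all]

The detector's output is appended before the reduction, so the next iteration's classifier is trained in a space that encodes where the previous classifier erred.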

Regarding the running time of this algorithm, the method multiplies the time of the simple algorithm (training a single classifier) only by a constant factor. Suppose that training a simple classifier on a data set with n data points and c classes takes O(f(n,c)) time, that in the worst case training the pairwise classifier on that data set takes O(g(n,c)) time, and that m is max_iteration (the predefined number of iterations). Assuming g(n,c) = O(f(n,c)), the running time of the method is O(3·m·f(n,c)), and consequently O(m·f(n,c)). This shows that, up to a constant factor (for fixed m), the time complexity of the algorithm matches that of the simple classifier, an overhead that is entirely tolerable given the accuracy achieved.
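The bound can be summarized in one display (a hedged reading, since the paper does not state the cost of the LDA/feature-augmentation step; we assume it is also O(f(n,c)) per iteration):

    T(n,c) = \sum_{i=1}^{m} \big( f(n,c) + g(n,c) + O(f(n,c)) \big) = O(3\,m\,f(n,c)) = O(m\,f(n,c)).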

4. Experimental Results

The experiments were performed on three data sets: "Iris", "Wine" and "Bupa". A summary of the characteristics of these data sets is given in table 1. The training set, test set and validation set contain 60%, 15% and 25% of the entire data set, respectively. The pseudocode of the proposed combinational algorithm is as follows:

Proposed_Algorithm(original data set):

    validation data, training data, test data = extract(original data set);
    for i = 1 to number_of_classes
        data_of_class_validation(i) = extract_data_of_each_class(validation data);
    end for
    for c = 1 to max_iteration
        train(classifier, training data, validation set);
        error = compute_error_on_each_class(classifier, validation set);
        for i = 1 to number_of_classes
            if error(i) > error_threshold
                data_erroneous_nonerroneous{i} =
                    divide_data_in_erroneous_nonerroneous(data_of_class_validation(i));
            end if
        end for
    end for
    ensemble = majority_vote(out(1..max_iteration));
    accuracy = compute_accuracy(ensemble);
    return accuracy, saved_classifiers, classifier_erroneous_nonerroneous{1..c};

Table 1. A summary of the characteristics of our data sets.

Data set   No. of Classes   No. of Features   No. of Patterns   Patterns per Class
Wine       3                13                178               59-71-48
Bupa       2                6                 345               145-200
Iris       3                4                 150               50-50-50

4.1. Data Sets

The "Iris" data set contains 150 samples in 3 classes. Each of classes contains 50 samples. Each class of this data set refers to a type of iris plant. One class is linearly separable from the other two. Each sample has four continuous-valued features. The "Wine" data set contains 178 samples in 3 classes. Classes contain 59, 71 and 48 respectively where each class refers to a type of wine. These data are the results of a chemical analysis of wines grown in the same region but derived from three different cultivars. The analysis determined the quantities of 13 constituents found in each of the three types of wines. And finally the "Bupa" data set contains 345 samples in 2 classes. Classes contain 145 and 200 respectively.  Each data point has six features. In this data set, the first 5 features are all blood tests which are thought to be sensitive to liver disorders that might arise from excessive alcohol consumption.


4.2. Results

The predefined number max_iteration in the algorithm is set experimentally to 3 here. All classifiers used in the ensemble are support vector machines (SVMs). The training set, test set and validation set contain 60%, 15% and 25% of the entire data set, respectively. The results are reported in table 2.

As can be inferred from table 2, different iterations produce diverse and usually better accuracies than the initial classifier. The ensemble of classifiers is not always better than the best classifier over the iterations, but it is always above the average accuracy, and, more importantly, it almost always outperforms the initial classifier and is never worse than it. Indeed, the first classifier (the classifier of iteration 1) is the simple classifier against which the ensemble results must be compared. In the table, each row is one independent run of the algorithm, and each column gives the accuracy of the classifier generated in the iteration corresponding to the column number. The Ensemble column is the ensemble accuracy of the classifiers generated in iterations 1-3.

Table 2. A summary of seven independent runs of the algorithm over the "Iris" data set.

"Iris"   Iteration 1   Iteration 2   Iteration 3   Ensemble
Run 1    0.93333       1             1             1
Run 2    0.9           0.9           0.96667       0.93333
Run 3    0.9           0.86667       0.33333       0.9
Run 4    0.93333       0.93333       0.96667       0.96667
Run 5    0.96667       0.96667       0.8           0.96667
Run 6    0.9           0.93333       0.26667       0.93333
Run 7    0.9222        0.9333        0.7222        0.95

5. Conclusion and Discussion

It was shown that the necessary diversity of an ensemble can be achieved by this algorithm. The method was explained in detail above, and the results over some real data sets support our claim. Although the ensemble created by the proposed method may not always outperform every classifier produced across the iterations, it always possesses the diversity needed to form an ensemble, and consequently it always outperforms the first, simple classifier. We also showed that the time complexity of this mechanism is not much higher than that of simple methods. Indeed, by manipulating the features of the data set we inject that diversity into the classifiers; that is, this method is a generative method that manipulates the data set in a way different from previous methods such as bagging and boosting.

Acknowledgement

This work is partially supported by the Data and Text Mining Research group at the Computer Research Center of Islamic Sciences (CRCIS), NOOR Co., P.O. Box 37185-3857, Qom, Iran.

References

1. Robert W. Floyd. Assigning meanings to programs. In Symposium on Applied Mathematics, pages 19–32. American Mathematical Society, 1967.
  2. M. D. Ernst, J. Cockrell, W. G. Griswold, D. Notkin, Dynamically discovering likely program invariants to support program evolution, IEEE TSE 27 (2) (2007) 99–123
  3. B. Weiß. Inferring invariants by static analysis in KeY. Diplomarbeit, University of Karlsruhe, March 2007
4. Neil D. Jones and Flemming Nielson. Abstract interpretation: A semantics-based tool for program analysis. In S. Abramsky, D. M. Gabbay, and T. S. E. Maibaum, editors, Handbook of Logic in Computer Science, volume 4, pages 527–636. Oxford University Press, 1995.
  5. M. Boshernitsan, R. Doong, A. Savoia, From Daikon to Agitator: Lessons and challenges in building a commercial tool for developer testing, ISSTA (2006) 169–179.
  6. S. Hangal, M. S. Lam, Tracking down software bugs using automatic anomaly detection, in: ICSE, 2002, pp. 291–301.
  7. C. Csallner et al. DySy: Dynamic symbolic execution for invariant inference. In Proc. of ICSE, 2008.
  8. Michael D. Ernst, Adam Czeisler, William G. Griswold, and David Notkin. Quickly detecting relevant program invariants. In ICSE, Limerick, Ireland, June 7-9, 2000.
  9. Michael D. Ernst, William G. Griswold, Yoshio Kataoka, and David Notkin. "Dynamically Discovering Program Invariants Involving Collections", Technical Report, University of Washington, 2000.
  10. S. Nadeem and S. Saleem, Theoretical Investigation of MHD Nanofluid Flow Over a Rotating Cone: An Optimal Solutions, Information Sciences Letters, 3(2), 55-62 (2014).
11. M. Zamoum, M. Kessal, Analysis of cavitating flow through a venturi, Scientific Research and Essays, 10(11), 383-391 (2015).
  12. H. Morad, GPS Talking For Blind People, Journal of Emerging Technologies in Web Intelligence, 2(3), 239-243 (2010).
  13. D. Rawtani and Y. K. Agrawal, Study the Interaction of DNA with Halloysite Nanotube-Gold Nanoparticle Based Composite, Journal of Bionanoscience, 6, 95-98 (2012).
14. V. Karthick and K. Ramanathan, Investigation of Peramivir-Resistant R292K Mutation in A (H1N9) Influenza Virus by Molecular Dynamics Simulation Approach, Journal of Bioinformatics and Intelligent Control, 2, 29-33 (2013).
  15. R. Uthayakumar and A. Gowrisankar, Generalized Fractal Dimensions in Image Thresholding Technique, Information Sciences Letters, 3(3), 125-134 (2014).
16. B. Ould Bilal, D. Nourou, C. M. F Kébé, V. Sambou, P. A. Ndiaye and M. Ndongo, Multi-objective optimization of hybrid PV/wind/diesel/battery systems for decentralized application by minimizing the levelized cost of energy and the CO2 emissions, International Journal of Physical Sciences, 10(5), 192-203 (2015).
17. A. Maqbool, H. U. Dar, M. Ahmad, G. N. Malik, G. Zaffar, S. A. Mir and M. A. Mir, Comparative performance of some bivoltine silkworm (Bombyx mori L.) genotypes during different seasons, Scientific Research and Essays, 10(12), 407-410 (2015).
  18. R. B. Little, A theory of the relativistic fermionic spinrevorbital, International Journal of Physical Sciences,10(1), 1-37 (2015).
  19. Z. Chen, F. Wang and Li Zhu, The Effects of Hypoxia on Uptake of Positively Charged Nanoparticles by Tumor Cells, Journal of Bionanoscience, 7, 601-605 (2013).
20. A. Kaur and V. Gupta, A Survey on Sentiment Analysis and Opinion Mining Techniques, Journal of Emerging Technologies in Web Intelligence, 5(4), 367-371 (2013).
  21. P. Saxena and M. Agarwal, Finite Element Analysis of Convective Flow through Porous Medium with Variable Suction, Information Sciences Letters, 3(3), 97-101 (2014).
  22. J. G. Bruno, Electrophoretic Characterization of DNA Oligonucleotide-PAMAM Dendrimer Covalent and Noncovalent Conjugates, Journal of Bionanoscience, 9, 203-208 (2015).
  23. K. K. Tanaeva, Yu. V. Dobryakova, and V. A. Dubynin, Maternal Behavior: A Novel Experimental Approach and Detailed Statistical Analysis, Journal of Neuroscience and Neuroengineering, 3, 52-61 (2014).
  24. E. Zaitseva and M. Rusin, Healthcare System Representation and Estimation Based on Viewpoint of Reliability Analysis, Journal of Medical Imaging and Health Informatics, 2, 80-86 (2012).
  25. R. Ahirwar, P. Devi and R. Gupta, Seasonal incidence of major insect-pests and their biocontrol agents of soybean crop (Glycine max L. Merrill), Scientific Research and Essays, 10(12), 402-406 (2015).
26. H. Boussak, H. Chemani and A. Serier, Characterization of porcelain tableware formulation containing bentonite clay, International Journal of Physical Sciences, 10(1), 38-45 (2015).
27. Q. Xiaohong and Q. Xiaohui, An Evolutionary Particle Swarm Optimizer Based on Fractal Brownian Motion, Journal of Computational Intelligence and Electronic Systems, 1, 138 (2012).
  28. G. Minhas and M. Kumar, LSI Based Relevance Computation for Topical Web Crawler, Journal of Emerging Technologies in Web Intelligence, 5(4), 401-406 (2013).
  29. Y. Shang, Efficient strategies for attack via partial information in scale-free networks, Information Sciences Letters, 1(1), 1-5 (2012).
  30. I. Rathore and J. C. Tarafdar, Perspectives of Biosynthesized Magnesium Nanoparticles in Foliar Application of Wheat Plant,Journal of Bionanoscience, 9, 209-214 (2015).
  31. H. Yan and H. Hu, Research and Realization of ISIC-CDIO Teaching Experimental System Based on RFID Technology ofWeb of Things,Journal of Bionanoscience, 7, 696-702 (2013).
  32. R. Teles, B. Barroso, A. Guimaraes and H. Macedo, Automatic Generation of Human-like Route Descriptions: A Corpus-driven Approach, Journal of Emerging Technologies in Web Intelligence, 5(4), 413-423 (2013).
  33. E. S. Hui, Diffusion Magnetic Resonance Imaging of Ischemic Stroke, Journal of Neuroscience and Neuroengineering, 1, 48-53 (2012).
  34. O. E. Emam, M. El-Araby and M. A. Belal, On Rough Multi-Level Linear Programming Problem, Information Sciences Letters, 4(1), 41-49 (2015).
35. B. Prasad, D.C. Dimri and L. Bora, Effect of pre-harvest foliar spray of calcium and potassium on fruit quality of Pear cv. Pathernakh, Scientific Research and Essays, 10(11), 392-396 (2015).
  36. H. Parvin, H. Alinejad-Rokny and M. Asadi, An Ensemble Based Approach for Feature Selection, Australian Journal of Basic and Applied Sciences, 7(9), 33-43 (2011).
  37. Fouladgar M.H., Minaei-Bidgoli B., Parvin H.: Enriching Dynamically Detected Invariants in the Case of Arrays. International Conference on Computational Science and Its Applications (ICCSA 2011), LNCS, ISSN: 0302-9743. LNCS. Springer, Heidelberg, pp. 622–632, 2011.
  38. H. Parvin, H. Alinejad-Rokny, S. Parvin,Divide and Conquer Classification, Australian Journal of Basic & Applied Sciences, 5(12), 2446-2452 (2011).
  39. H. Parvin, B. Minaei-Bidgoli, H. Alinejad-Rokny, A New Imbalanced Learning and Dictions Tree Method for Breast Cancer Diagnosis, Journal of Bionanoscience, 7(6), 673-678 (2013).
  40. H. Parvin H., H. Alinejad-Rokny, M. Asadi, An Ensemble Based Approach for Feature Selection, Journal of Applied Sciences Research, 7(9), 33-43 (2011).
  41. H. Parvin, H. Helmi, B. Minaie-Bidgoli, H. Alinejad-Rokny, H. Shirgahi, Linkage learning based on differences in local optimums of building blocks with one optima, International Journal of Physical Sciences, 6(14), 3419-3425 (2011).
  42. H. Parvin, B. Minaei-Bidgoli, H. Alinejad-Rokny, S. Ghatei, An innovative combination of particle swarm optimization, learning automaton and great deluge algorithms for dynamic environments, International Journal of Physical Sciences, 6(22), 5121-5127 (2011).
  43. H. Parvin, H. Alinejad-Rokny, S. Parvin,A Classifier Ensemble of Binary Classifier Ensembles, International Journal of Learning Management Systems, 1(2), 37-47 (2013).
  44. H. Parvin, B. Minaei-Bidgoli, H. Alinejad-Rokny, W.F. Punch, Data weighing mechanisms for clustering ensembles, Computers & Electrical Engineering, 39(5): 1433-1450 (2013).
  45. H. Parvin, H. Alinejad-Rokny, B. Minaei-Bidgoli, S. Parvin, A new classifier ensemble methodology based on subspace learning, Journal of Experimental & Theoretical Artificial Intelligence, 25(2), 227-250 (2013).
  46. R. Agrawal, J. Gehrke, D. Gunopulos, P. Raghavan, Automatic Subspace Clustering of High Dimensional Data for Data Mining Applications, In Proceedings of the 1998 ACM SIGMOD international conference on Management of data, (1998) 94-105.
  47. A. Blum, R. Rivest, Training a 3-node neural network is NP-complete, Neural Networks, 5 (1992) 117-127.
  48. J.W. Chang, D.S. Jin, A new cell-based clustering method for large-high dimensional data in data mining applications, In Proceedings of the ACM symposium on Applied computing, (2002) 503-507.
  49. S. Dudoit, J. Fridlyand, Bagging to improve the accuracy of a clustering procedure, Bioinformatics, 19(9) (2003) 1090-1099.
  50. K. Faceli, C.P. Marcilio, D. Souto, Multi-objective Clustering Ensemble, Proceedings of the Sixth International Conference on Hybrid Intelligent Systems (HIS'06), (2006).
  51. A.K. Jain, R.C. Dubes R.C, Algorithms for Clustering Data, Prentice Hall, (1988).
  52. R. Kohavi R G. John, Wrappers for feature subset selection, Artificial Intelligence, 97(1-2) (1997) 273-324.
  53. B. Liu, Y. Xia, P.S. Yu, Clustering through decision tree construction, In Proceedings of the ninth international conference on Information and knowledge management, (2000), 20-29.
  54. R. Miller, Y. Yang, Association rules over interval data, In Proc. ACM SIGMOD International Conf. on Management of Data, (1997) 452-461.
  55. A. Mirzaei, M. Rahmati, M. Ahmadi, A new method for hierarchical clustering combination, Intelligent Data Analysis, 12(6), (2008) 549-571.
  56. C.B.D.J Newman, S. Hettich S, C. Merz, UCI repository of machine learning databases, http://www.ics.uci.edu/˜mlearn/MLSummary.html, (1998).
  57. L. Parsons, E. Haque, H. Liu, Subspace clustering for high dimensional data: a review, ACM SIGKDD Explorations Newsletter, 6(1) (2004) 90-105.
  58. C.M. Procopiuc, M. Jones, P.K. Agarwal P.K, T.M. Murali T.M, A Monte Carlo algorithm for fast projective clustering, In: Proceedings of the ACM SIGMOD conference on management of data, (2002) 418-427.
  59. R. Srikant, R. Agrawal, Mining Quantitative Association Rules in Large Relational Tables, In Proc. of the ACM SIGMOD Conference on Management of Data, Montreal, Canada, (1996).
  60. C.H. Cheng, A.W. Fu, Y. Zhang, Entropy-based subspace clustering for mining numerical data, In Proceedings of the fifth ACM SIGKDD international conference on Knowledge discovery and data mining, (1999) 84-93.
  61. C. Domeniconi, M. Al-Razgan, Weighted cluster ensembles: Methods and analysis, TKDD, 2(4) (2009).
  62. C. Domeniconi, D. Gunopulos, S. Ma, B. Yan, M. Al-Razgan, D. Papadopoulos, Locally adaptive metrics for clustering high dimensional data, Data Mining & Knowledge Discovery, 14(1) (2007) 63-97.
  63. A. Strehl, J. Ghosh J, Cluster ensembles-a knowledge reuse framework for combining multiple partitions, Journal of Machine Learning Research, 3 (2002) 583-617.
  64. J. Munkres, Algorithms for the Assignment and Transportation Problems, Journal of the Society for Industrial and Applied Mathematics, 5(1) (1957) 32-38.
  65. Fred, A. and Jain, A. K. (2002). "Data Clustering Using Evidence Accumulation", Proc. of the 16th Intl. Conf. on Pattern Recognition, ICPR02, Quebec City, pp. 276 – 280.
  66. A. Fred, "Finding Consistent Clusters in Data Partitions," Proc. Second Int’l Workshop Multiple Classifier Systems, J. Kittler and F. Roli, eds., pp. 309-318, 2001.
  67. A. Fred and A.K. Jain, "Evidence Accumulation Clustering Based on the k-means Algorithm," Proc. Structural, Syntactic, and Statistical Pattern Recognition, Joint IAPR Int’l Workshops SSPR 2002 and SPR 2002, T. Caelli, et al., eds., pp. 442-451, 2002.
  68. Fred A. and Jain A.K. (2005). Combining Multiple Clusterings Using Evidence Accumulation. IEEE Trans. on Pattern Analysis and Machine Intelligence, 27(6):835–850.
  69. Fern X.Z. and Lin W. (2008), Cluster Ensemble Selection, SIAM International Conference on Data Mining, pp. 787-797.
  70. M.H. Fouladgar, B. Minaei-Bidgoli, H. Parvin, H. Alinejad-Rokny, Extension in The Case of Arrays in Daikon like Tools, Advanced Engineering Technology and Application, 2(1), 5-10 (2013).
  71. I. Jamnejad, H. Heidarzadegan, H. Parvin, H. Alinejad-Rokny, Localizing Program Bugs Based on Program Invariant, International Journal of Computing and Digital Systems, 3(2), 141-150 (2014).
  72. H. Parvin, H. Alinejad-Rokny, S. Parvin, H. Shirgahi, A new Conditional Invariant Detection Dramework (CIDF), Scientific Research and Essays, 8(6), 265-273 (2013).
  73. Chang, Y.H. and C.H. Yeh, 2001. Evaluating airline competitiveness using multi attribute decision making. Omega 29: 405-415.
  74. Hu, Y.C. and J.F. Tsai, 2006. Backpropagation multi-layer perceptron for incomplete pairwise comparison matrices in analytic hierarchy process. Applied mathematics and computation 181(1): 53-62.
  75. Hwang, C.L. and K. Yoon, 1981. Multiple attribute decision making.Springer-Verlag, Berlin Heidelberg New York.
  76. Agalgaonkar, A.P., S.V. Kulkarni. and S.A. Khaparde, 2005. Multi-attribute decision making approach for strategic planning of DGs. Power Engineering Society General Meeting 3: 2985-2990.
  77. Byun, H.S. and K.H. Lee, 2006. Determination of the optimal build direction for different rapid prototyping processes using multi criterion decision making. Robotics and Computer-Integrated Manufacturing 22: 69-80.
  78. Kabassi, K. and M. Virvou, 2004. Personalised adult e-training on computer use based on multiple attribute decision making. Interacting with Computers 16: 115-132.
  79. Yang, T., M.C. Chen. and C.C. Hung, 2007. Multiple attribute decision-making methods for the dynamic operator allocation problem. Mathematics and Computers in Simulation 73(5): 285-299.
  80. Geoffrion, M., J.S. Dyer, A. Feinberg, 1972. An interactive approach for multi-criterion optimization, with an application to the operation of an academic department. Management Science 19(4): 357-368.
  81. Sun, A., A. Stam and R.E. Steuer, 1996. Solving multiple objective programming problems using feed-forward artificial neural networks: the interactive FFANN procedure. Management Science 42(6): 835-849.
  82. Wang, J. and B. A. Malakooti, 1992. Feed forward neural network for multiple criteria decision making. Computers & Operations Research 19(2): 151-167.
  83. Li, Q. A. 2008. Fuzzy neural network based Multi-criteria decision making approach for outsourcing supplier evaluation. The 3rd IEEE Conference on Industrial Electronics and Applications 1: 192-196.
  84. Kong, F. and H.  Liu, 2006. Fuzzy RBF neural network model for multiple attribute decision making. The 13th International Conference on Neural Information Processing Part III: 1046-1054.
85. N. Lalithamani and M. Sabrigiriraj, Dual Encryption Algorithm to Improve Security in Hand Vein and Palm Vein-Based Biometric Recognition, Journal of Medical Imaging and Health Informatics, 5, 545-551 (2015).
86. M. Khan and R. Jehangir, Fuzzy resolvability modulo fuzzy ideals, International Journal of Physical Sciences, 7(6), 953-956 (2012).
87. M. Ravichandran and A. Shanmugam, Amalgamation of Opportunistic Subspace & Estimated Clustering on High Dimensional Data, Australian Journal of Basic and Applied Sciences, 8(3), 88-97 (2014).
  88. M. Zhang, Optimization of Inter-network Bandwidth Resources for Large-Scale Data Transmission, Journal of Networks, 9(3), 689-694 (2014).
  89. O. O. E. Ajibola, O. Ibidapo-Obe, and V. O. S. OIunloyo, A Model for the Management of Gait Syndrome in Huntington's Disease Patient, Journal of Bioinformatics and Intelligent Control, 3, 15-22 (2014).
  90. L. Z. Pei, T. Wei, N. Lin and Z. Y. Cai, Electrochemical Sensing of Histidine Based on the Copper Germanate Nanowires Modified Electrode, Journal of Bionanoscience, 9, 161-165 (2015).
  91. M. K. Elboree, Explicit Analytic Solution for the Nonlinear Evolution Equations using the Simplest Equation Method, Mathematical Sciences Letters, 3(1), 59-63 (2014).
  92. R. Yousef and T. Almarabeh, An enhanced requirements elicitation framework based on business process models, Scientific Research and Essays,10(7), 279-286 (2015).
  93. K. Manimekalai and M.S. Vijaya, Taxonomic Classification of Plant Species Using Support Vector Machine, Journal of Bioinformatics and Intelligent Control, 3, 65-71 (2014).
  94. S. Rajalaxmi and S. Nirmala, Automated Endo Fitting Curve for Initialization of Segmentation Based on Chan Vese Model, Journal of Medical Imaging and Health Informatics, 5, 572-580 (2015).
  95. T. Mahmood and K. Hayat, Characterizations of Hemi-Rings by their Bipolar-Valued Fuzzy h-Ideals, Information Sciences Letters, 4(2), 51-59 (2015).
  96. Agarwal and N. Mittal, Semantic Feature Clustering for Sentiment Analysis of English Reviews, IETE Journal of Research, 60(6), 414-422 (2014).
  97. S. Radharani and M. L.Valarmathi, Content Based Watermarking Techniques using HSV and Fractal Dimension in Transform Domain, Australian Journal of Basic and Applied Sciences, 8(3), 112-119 (2014).
98. H. W. and W. Wang, An Improved Artificial Bee Colony Algorithm and Its Application on Production Scheduling, Journal of Bioinformatics and Intelligent Control, 3, 153-159 (2014).
99. L. Gupta, Effect of orientation of lunar apse on earthquakes, International Journal of Physical Sciences, 7(6), 974-981 (2012).
  100. S. Iftikhar, F. Ahmad and K. Fatima, A Semantic Methodology for Customized Healthcare Information Provision, Information Sciences Letters, 1(1), 49-59 (2012).
  101. P. D. Sia, Analytical Nano-Modelling for Neuroscience and Cognitive Science, Journal of Bioinformatics and Intelligent Control, 3, 268-272 (2014).
102. C. Guler, Production of particleboards from licorice (Glycyrrhiza glabra) and European black pine (Pinus nigra Arnold) wood particles, Scientific Research and Essays, 10(7), 273-278 (2015).
103. Z. Chen and J. Hu, Learning Algorithm of Neural Networks on Spherical Cap, Journal of Networks, 10(3), 152-158 (2015).
  104. W. Lu, Parameters of Network Traffic Prediction Model Jointly Optimized by Genetic Algorithm, Journal of Networks, 9(3), 695-702 (2014).
105. K. Boubaker, An attempt to solve neutron transport equation inside supercritical water nuclear reactors using the Boubaker Polynomials Expansion Scheme, International Journal of Physical Sciences, 7(19), 2730-2734 (2012).
  106. K. Abd-Rabou, Fixed Point Results in G-Metric Space, Mathematical Sciences Letters, 3(3), 141-146 (2014).
  107. Binu and M. Selvi, BFC: Bat Algorithm Based Fuzzy Classifier for Medical Data Classification, Journal of Medical Imaging and Health Informatics, 5, 599-606 (2015).
  108. C. Kamath, Analysis of Electroencephalogram Background Activity in Epileptic Patients and Healthy Subjects Using Dispersion Entropy, Journal of Neuroscience and Neuroengineering, 3, 101-110 (2014).
  109. G. Kaur and E. M. Bharti, Securing Multimedia on Hybrid Architecture with Extended Role-Based Access Control, Journal of Bioinformatics and Intelligent Control, 3, 229-233 (2014).
  110. M. Ramalingam and D. Rana, Impact of Nanotechnology in Induced Pluripotent Stem Cells-driven Tissue Engineering and  Regenerative Medicine, Journal of Bionanoscience, 9, 13-21 (2015).
  111. S. Downes, New Technology Supporting Informal Learning, Journal of Emerging Technologies in Web Intelligence, 2(1), 27-33 (2010).
112. R. Periyasamy, T. K. Gandhi, S. R. Das, A. C. Ammini and S. Anand, A Screening Computational Tool for Detection of Diabetic Neuropathy and Non-Neuropathy in Type-2 Diabetes Subjects, Journal of Medical Imaging and Health Informatics, 2, 222-229 (2012).
  113. Y. Qin, F. Wang and C. Zhou, A Distributed UWB-based Localization System in Underground Mines, Journal of Networks, 10(3), 134-140 (2015).
114. P. Saxena and C. Ghosh, A review of assessment of benzene, toluene, ethylbenzene and xylene (BTEX) concentration in urban atmosphere of Delhi, International Journal of Physical Sciences, 7(6), 850-860 (2012).
  115. J. Hu, Z. Zhou and M. Teng, The Spatiotemporal Variation of Ecological Risk in the Lijiang River Basin Based on Land Use Change, Journal of Bionanoscience, 9, 153-160 (2015).
116. N. Saleem, M. Ahmad, S. A. Wani, R. Vashnavi and Z. A. Dar, Genotype-environment interaction and stability analysis in Wheat (Triticum aestivum L.) for protein and gluten contents, Scientific Research and Essays, 10(7), 260-265 (2015).
  117. R. A. Rashwan and S. M. Saleh, A Coupled Fixed Point Theorem for Three Pairs of w-Compatible Mappings in G-metric spaces, Mathematical Sciences Letters, 3(1), 17-20 (2014).
  118. S. P. Singh and B. K. Konwar, Carbon Nanotube Assisted Drug Delivery of the Anti-Malarial Drug Artemesinin and Its Derivatives-A TheoreticalNanotechnology Approach, Journal of Bionanoscience, 7, 630-636 (2013).
119. R. Dinasarapu and S. Gupta, Biological Data Integration and Dissemination on Semantic Web-A Perspective, Journal of Bioinformatics and Intelligent Control, 3, 273-277 (2014).
  120. W. Qiaonong, X. Shuang, and W. Suiren, Sparse Regularized Biomedical Image Deconvolution Method Based on Dictionary Learning, Journal of Bionanoscience, 9, 145-152 (2015).
  121. C. Prema and D. Manimegalai, Adaptive Color Image Steganography Using Intra Color Pixel Value Differencing, Australian Journal of Basic and Applied Sciences, 8(3), 161-167 (2014).
122. R. Adollah, M. Y. Mashor, H. Rosline, and N. H. Harun, Multilevel Thresholding as a Simple Segmentation Technique in Acute Leukemia Images, Journal of Medical Imaging and Health Informatics, 2, 285-288 (2012).
  123. H. Uppili, Proton-Induced Synaptic Transistors: Simulation and Experimental Study, Journal of Neuroscience and Neuroengineering, 3, 117-129 (2014).
  124. Chen, J. and S. Lin, 2003. An interactive neural network-based approach for solving multiple criteria decision-making problems. Decision Support Systems 36: 137-146.
  125. Chen, J. and S. A. Lin, 2004. Neural network approach-decision neural network (DNN) for preference assessment. IEEE Transactions on systems, Man, and Cybernetics-Part C: Applications and reviews 34: 219-225.
  126. H. B. Kekre and T. K. Sarode, Vector Quantized Codebook Optimization Using Modified Genetic Algorithm, IETE Journal of Research, 56(5), 257-264 (2010).
127. M. Gera, R. Kumar, V. K. Jain, Fabrication of a Pocket Friendly, Reusable Water Purifier Using Silver Nano Embedded Porous Concrete Pebbles Based on Green Technology, Journal of Bionanoscience, 8, 10-15 (2014).
  128. M. S. Kumar and S. N. Devi, Sparse Code Shrinkage Based ECG De-Noising in Empirical Mode Decomposition Domain, Journal of Medical Imaging and Health Informatics, 5, 1053-1058 (2015).
  129. C. Zhou, Y. Li, Q. Zhang and B. Wang, An Improved Genetic Algorithm for DNA Motif Discovery with Gibbs Sampling Algorithm, Journal of Bionanoscience, 8, 219-225 (2014).
  130. R. Bhadada and K. L. Sharma, Evaluation and Analysis of Buffer Requirements for Streamed Video Data in Video on Demand Applications, IETE Journal of Research, 56(5), 242-248 (2010).
  131. M. Kurhekar and U. Deshpande, Deterministic Modeling of Biological Systems with Geometry with an Application to Modeling of Intestinal Crypts, Journal of Medical Imaging and Health Informatics, 5, 1116-1120 (2015).
  132. S. Prabhadevi and Dr. A.M. Natarajan, A Comparative Study on Digital Signatures Based on Elliptic Curves in High Speed Ad Hoc Networks, Australian Journal of Basic and Applied Sciences, 8(2), 1-6 (2014).
  133. X. Jin and Y. Wang, Research on Social Network Structure and Public Opinions Dissemination of Micro-blog Based on Complex Network Analysis, Journal of Networks, 8(7), 1543-1550 (2013).
  134. O. G. Avrunin, M. Alkhorayef, H. F. I. Saied, and M. Y. Tymkovych, The Surgical Navigation System with Optical Position Determination Technology and Sources of Errors,Journal of Medical Imaging and Health Informatics, 5, 689-696 (2015).
  135. R. Zhang, Y. Bai, C. Wang and W. Ma, Surfactant-Dispersed Multi-Walled Carbon Nanotubes: Interaction and Antibacterial Activity, Journal of Bionanoscience, 8, 176-182 (2014).
  136. B. K. Singh, Generalized Semi-bent and Partially Bent Boolean Functions, Mathematical Sciences Letters, 3(1), 21-29 (2014).
  137. S. K. Singla and V. Singh, Design of a Microcontroller Based Temperature and Humidity Controller for Infant Incubator, Journal of Medical Imaging and Health Informatics, 5, 704-708 (2015).
138. N. Barnthip and A. Muakngam, Preparation of Cellulose Acetate Nanofibers Containing Centella Asiatica Extract by Electrospinning Process as the Prototype of Wound-Healing Materials, Journal of Bionanoscience, 8, 313-318 (2014).
  139. R. JacFredo, G. Kavitha and S. Ramakrishnan, Segmentation and Analysis of Corpus Callosum in Autistic MR Brain Images Using Reaction Diffusion Level Sets, Journal of Medical Imaging and Health Informatics, 5, 737-741 (2015).
  140. Wang, B. Zhu, An Improved Algorithm of the Node Localization in Ad Hoc Network, Journal of Networks, 9(3), 549-557 (2014).
  141. T. Buvaneswari and A. A. Iruthayaraj, Secure Discovery Scheme and Minimum Span Verification of Neighbor Locations in Mobile Ad-hoc Networks, Australian Journal of Basic and Applied Sciences, 8(2), 30-36 (2014).
142. H. Parvin, H. Alinejad-Rokny, N. Seyedaghaee, S. Parvin, A Heuristic Scalable Classifier Ensemble of Binary Classifier Ensembles, Journal of Bioinformatics and Intelligent Control, 1(2), 163-170 (2013).
  143. M.H. Fouladgar, B. Minaei-Bidgoli, H. Parvin, H. Alinejad-Rokny, Extension in The Case of Arrays in Daikon like Tools, Advanced Engineering Technology and Application, 2(1), 5-10 (2013).
144. H. Parvin, M. MirnabiBaboli, H. Alinejad-Rokny, Proposing a Classifier Ensemble Framework Based on Classifier Selection and Decision Tree, Engineering Applications of Artificial Intelligence, 37, 34-42 (2015).
  145. Y. Zhang, Z. Wang and Z. Hu, Nonlinear Electroencephalogram Analysis of Neural Mass Model, Journal of Medical Imaging and Health Informatics, 5, 783-788 (2015).
  146. S. Panwar and N. Nain, A Novel Segmentation Methodology for Cursive Handwritten Documents, IETE Journal of Research, 60(6), 432-439 (2014).
  147. H. Mao, On Applications of Matroids in Class-oriented Concept Lattices, Mathematical Sciences Letters, 3(1), 35-41 (2014).
  148. D. Kumar, K. Singh, V. Verma and H. S. Bhatti, Synthesis and Characterization of Carbon Quantum Dots from Orange Juice, Journal of Bionanoscience, 8, 274-279 (2014).
  149. V. Kumutha and S. Palaniammal, Enhanced Validity for Fuzzy Clustering Using Microarray data, Australian Journal of Basic and Applied Sciences, 8(3), 7-15 (2014).
  150. Y. Wang, C. Yang and J. Yu, Visualization Study on Cardiac Mapping: Analysis of Isopotential Map and Isochron Map, Journal of Medical Imaging and Health Informatics, 5, 814-818 (2015).
  151. R. Su, Identification Method of Sports Throwing Force Based on Fuzzy Neural Network, Journal of Networks, 8(7), 1574-1581 (2013).
  152. Matsuda, S. A. 2005. Neural network model for the decision-making process based on AHP. Proceedings of International Joint Conference on Neural Networks, Montreal, Canada.
153. Kohonen, T. 1987. Self-Organizing and Associative Memory, 2nd edition. Berlin: Springer-Verlag.
154. Haykin, S. 1999. Neural networks: a comprehensive foundation. Prentice Hall.
  155. Milan, J. and R. Aura, 2002. An application of the multiple criteria decision making analysis to the selection of a new hub airport. EJTIR 2(2): 113-141.
