Abstract

Hadoop Based Parallel Framework for Feature Subset Selection in Big Data

Revathi.L, A.Appandiraj

This is the era of Big Data. Since the scale of data grows every minute, handling massive data has become essential. Massive data poses a great challenge for classification, and the high dimensionality of modern massive datasets is a considerable challenge for clustering approaches: the curse of dimensionality can make clustering very slow, and the presence of many irrelevant features may prevent identification of the relevant underlying structure in the data. Feature selection is a key part of the clustering process: it identifies a subset of the features that produces results as accurate and consistent as the original full feature set. Redesigning traditional machine learning and data mining algorithms with the MapReduce programming model is necessary for dealing with massive data sets. MapReduce is a parallel processing framework for large datasets, and Hadoop is its open-source implementation. The objective of this paper is to implement the FAST clustering algorithm with MapReduce programming to remove irrelevant and redundant features. Following preprocessing, a cluster-based MapReduce feature selection approach is applied to produce an effective set of features.
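The irrelevant-feature removal stage described above can be sketched as a MapReduce-style pass that scores each feature against the class labels by symmetric uncertainty (the relevance measure used by FAST) and keeps only features above a threshold. The following is a minimal single-machine simulation, not the paper's Hadoop implementation; the record layout, the `0.1` threshold, and the helper names (`mapper`, `reduce_relevant`) are illustrative assumptions.

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of a discrete sequence."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def symmetric_uncertainty(x, y):
    """SU(X, Y) = 2 * IG(X, Y) / (H(X) + H(Y)), with IG = H(X) + H(Y) - H(X, Y)."""
    hx, hy = entropy(x), entropy(y)
    hxy = entropy(list(zip(x, y)))        # joint entropy over paired values
    denom = hx + hy
    return 2 * (hx + hy - hxy) / denom if denom else 0.0

def mapper(feature_index, column, labels, threshold=0.1):
    """Map phase: emit (feature_index, SU) only when the feature is relevant."""
    su = symmetric_uncertainty(column, labels)
    if su > threshold:
        yield feature_index, su

def reduce_relevant(records, labels, threshold=0.1):
    """Shuffle/reduce phase: collect surviving features keyed by index."""
    relevant = {}
    for idx, col in records:              # records: (feature_index, column) pairs
        for k, su in mapper(idx, col, labels, threshold):
            relevant[k] = su
    return relevant

# Toy dataset: feature 0 duplicates the label, feature 1 is noise.
labels = [0, 0, 1, 1, 0, 1]
records = [(0, [0, 0, 1, 1, 0, 1]),
           (1, [0, 1, 0, 1, 0, 1])]
relevant = reduce_relevant(records, labels)
print(relevant)  # feature 0 kept with SU = 1.0; feature 1 filtered out
```

In the actual Hadoop job each mapper would receive a split of the input file and emit `(feature, SU)` key-value pairs, with reducers performing the thresholding and the subsequent redundancy-removal clustering step of FAST.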

