Exploring the applications and methods of processing and storing massive data in the cloud environment

Publication year: 1398
Document type: Conference paper
Language: English

This paper is 9 pages long and is available for download in PDF format.

National document ID: GERMANCONF03_188

Indexing date: 12 Shahrivar 1399

Abstract:

Today, with the growing number of tools such as social networks and the emergence of concepts such as the Semantic Web, the volume of data and the processing load in large-scale systems have grown tremendously. For example, a search engine answers user queries in a fraction of a second as the result of efficient analysis of massive information collected from the web. A mechanism for processing massive data at an affordable cost is therefore very important, and one of the most widely used aspects of cloud computing is the processing of large data sets. The open-source Hadoop framework, provided by Apache, is a cloud-based platform for storing and processing this type of massive data, and its open-source nature also makes it more economical. Our aim in this study is to examine the advantages and disadvantages of two important parts of Hadoop: the Hadoop Distributed File System (HDFS), designed for massive data management, and the MapReduce service, which provides a framework for processing massive data in a distributed environment.
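
To make the two components named in the abstract more concrete, the sketch below shows a minimal word-count job written against Hadoop's Java MapReduce API. This is the canonical introductory example rather than code from the paper itself: the mapper emits a (word, 1) pair for every token in its input split, and the reducer sums the counts for each word. The input and output HDFS paths are assumed to be passed as command-line arguments.

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: for every word in the input split, emit the pair (word, 1).
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: sum all counts emitted for the same word.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);  // local pre-aggregation on each map node
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory (assumed argument)
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output directory (assumed argument)
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Because the input and output reside on HDFS, the framework can schedule map tasks on the nodes that already hold the data blocks, which is part of what makes this model economical for massive data sets.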

Authors

Payman Kalani Torbaghan

Department of Computer Engineering, Neyshabour Branch, Islamic Azad University, Neyshabour, Iran

Maryam Kheirabadi

Department of Computer Engineering, Neyshabour Branch, Islamic Azad University, Neyshabour, Iran

Reza Ghaemi

Department of Computer Engineering, Quchan Branch, Islamic Azad University, Quchan, Iran