Improving data transfer rate of Hadoop MapReduce framework using data blocks for massive data
Sujit Roy,  Md. Humaun Kabir,  Ripan Roy,  Md. Zahidul Alam
In this research paper, a new technique is proposed for processing massive data in the Hadoop MapReduce framework to improve the data rate by using synchronous data transmission, sending blocks of data from source to destination. The proposed method shows how to divide the data into blocks in an efficient manner, achieving a satisfactory data transfer rate by adjusting the split size to an appropriate value. In the traditional system, data transfer is normally accomplished through small blocks of 8 bits, whereas in the proposed system data transfer is performed through blocks of 80 to 132 bytes. Moreover, the traditional system adds 3 extra bits to each block of data during transmission, while the proposed system attaches an additional 32 bytes to each block. For this reason, the proposed system takes more time to transfer small amounts of data, but it transfers large amounts of data much faster than current systems. The simulation results show that the proposed model is more efficient and provides satisfactory performance for large data.
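The overhead trade-off described above can be sketched with simple arithmetic. This is a minimal illustration, not the paper's simulation: it assumes the traditional scheme attaches 3 control bits to every 8-bit block, and the proposed scheme attaches one 32-byte header to every 80- to 132-byte block, as stated in the abstract.

```python
import math

def traditional_bits(n_bytes):
    # 8-bit payload blocks, each carrying 3 extra control bits:
    # total on the wire = payload bits + 3 bits per payload byte
    return n_bytes * 8 + n_bytes * 3

def proposed_bits(n_bytes, block_bytes=132):
    # 80- to 132-byte payload blocks, each carrying a 32-byte header
    # (assumption: one header per block, last block may be partial)
    blocks = math.ceil(n_bytes / block_bytes)
    return n_bytes * 8 + blocks * 32 * 8

# Small payload: the fixed 32-byte header dominates, so the
# proposed scheme sends more bits than the traditional one.
print(proposed_bits(10), traditional_bits(10))          # 336 vs 110

# Large payload: the per-block header amortizes, so the proposed
# scheme sends fewer bits overall.
print(proposed_bits(1_000_000), traditional_bits(1_000_000))
```

With these assumed parameters, the 3-bits-per-byte overhead of the traditional scheme grows linearly at 37.5% of the payload, while the proposed scheme's overhead falls toward 32/132 ≈ 24% of the payload as the transfer size grows, matching the abstract's claim that the method pays off only for large data.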
Keywords: MapReduce, Massive Data, Incremental Processing, Hadoop, Distributed Computing, HDFS
Cite this Article
Sujit Roy, Md. Humaun Kabir, Ripan Roy, Md. Zahidul Alam, "Improving data transfer rate of Hadoop MapReduce framework using data blocks for massive data", International Journal of Engineering Development and Research (IJEDR), ISSN: 2321-9939, Volume 8, Issue 1, pp. 314-320, January 2020. Available at: http://www.ijedr.org/papers/IJEDR2001060.pdf