Abstract: Storing massive numbers of small files in Hadoop significantly degrades its storage and computing performance. By analyzing the HDFS architecture, this paper proposes a type-based small-file merging method: small files of the same type are merged into large files, and an index relationship from each small file to its merged file is established and stored in a HashMap. To further improve file-read speed, a HashMap-based cache mechanism is built. Experiments show that this method significantly improves the overall performance of HDFS when storing and reading massive numbers of small files.
Keywords: HDFS; HashMap; index; merge; cache
CLC number: TP3-0
Document code: A
Type-based Small File Merging Method on Big Data Platform |
QIN Jiawei, LIU Hui, FANG Muyun
(School of Computer Science and Technology, Anhui University of Technology, Ma'anshan 243002, China)
738437340@qq.com; liuhui@ahut.edu.cn; fangmy@ahut.edu.cn
Abstract: Storing large numbers of small files in Hadoop leads to significant degradation of storage and computing performance. By analyzing the architecture of HDFS (Hadoop Distributed File System), this paper proposes a small-file merging method based on file type: small files of the same type are merged into large files, and an index relationship from each small file to its merged file is established and stored in a HashMap. To further improve file-read speed, a HashMap-based cache mechanism is established. Experiments show that this method significantly improves the overall performance of HDFS when storing and reading massive numbers of small files.
Keywords: HDFS; HashMap; index; merge; cache
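The merge-and-index scheme summarized in the abstract can be sketched in Java as follows. This is a minimal illustration only: the class name `SmallFileIndex`, the `Entry` record of merged-file path, offset, and length, and the `read` cache method are all hypothetical names introduced here, not the authors' implementation.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch of the paper's idea: small files of one type are appended into a
// per-type merged file, a HashMap maps each small-file name to its location
// inside the merged file, and a second HashMap caches file contents.
class SmallFileIndex {

    /** Location of one small file inside a merged file. */
    static final class Entry {
        final String mergedFile; // HDFS path of the merged file (assumed layout)
        final long offset;       // byte offset within the merged file
        final long length;       // byte length of the small file

        Entry(String mergedFile, long offset, long length) {
            this.mergedFile = mergedFile;
            this.offset = offset;
            this.length = length;
        }
    }

    // Index: small-file name -> location in its merged file.
    private final Map<String, Entry> index = new HashMap<>();

    // Read cache: small-file name -> file contents (HashMap-based, as in
    // the paper's cache mechanism).
    private final Map<String, byte[]> cache = new HashMap<>();

    /** Record where a small file was written during merging. */
    void put(String name, String mergedFile, long offset, long length) {
        index.put(name, new Entry(mergedFile, offset, length));
    }

    /** Look up a small file's location; returns null if it was never merged. */
    Entry locate(String name) {
        return index.get(name);
    }

    /**
     * Read a small file, serving repeated reads from the cache. The loader
     * stands in for an HDFS seek-and-read against the merged file.
     */
    byte[] read(String name, Function<Entry, byte[]> loader) {
        return cache.computeIfAbsent(name, n -> {
            Entry e = index.get(n);
            return e == null ? null : loader.apply(e);
        });
    }
}
```

On a cache miss, `computeIfAbsent` invokes the loader once and stores the result; subsequent reads of the same name return the cached bytes without touching the merged file.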