dc.description.abstract | The aim of data compression is to reduce file size before storing it or transferring it over storage media. In this paper, Huffman and Shannon-Fano are the two algorithms used for the data compression process, applied to the compression of text files. Both algorithms work in essentially the same way: starting from the characters of a source sorted by their frequencies, they build a binary tree and derive the binary codes from that tree. In the Huffman algorithm, the binary tree is built from the leaves up to the root, a bottom-up approach. By contrast, in the Shannon-Fano algorithm, the binary tree is built from the root down to the leaves, a top-down approach; a minimal sketch of both constructions is given below.
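For illustration only: the paper's implementation is in Visual Basic 6.0, but the following minimal Python sketch shows the two constructions as this abstract describes them. The function names, the code-dictionary representation of subtrees, and the near-half split heuristic in shannon_fano_codes are assumptions of this sketch, not details from the paper.

    import heapq
    from collections import Counter

    def huffman_codes(text):
        # Bottom-up: repeatedly merge the two lowest-frequency nodes,
        # prefixing 0 to the codes in one subtree and 1 to the other.
        freq = Counter(text)
        heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
        heapq.heapify(heap)
        tie = len(heap)  # unique tie-breaker so dicts are never compared
        while len(heap) > 1:
            f1, _, left = heapq.heappop(heap)
            f2, _, right = heapq.heappop(heap)
            merged = {s: "0" + c for s, c in left.items()}
            merged.update({s: "1" + c for s, c in right.items()})
            heapq.heappush(heap, (f1 + f2, tie, merged))
            tie += 1
        return {s: c or "0" for s, c in heap[0][2].items()}

    def shannon_fano_codes(text):
        # Top-down: sort symbols by descending frequency, then recursively
        # split each group near half of its total frequency.
        freq = Counter(text)
        codes = {}
        def split(group, prefix):
            if len(group) == 1:
                codes[group[0]] = prefix or "0"
                return
            total, run, cut = sum(freq[s] for s in group), 0, 1
            for i, s in enumerate(group[:-1]):
                run += freq[s]
                if run >= total / 2:
                    cut = i + 1
                    break
            split(group[:cut], prefix + "0")
            split(group[cut:], prefix + "1")
        split(sorted(freq, key=freq.get, reverse=True), "")
        return codes

    print(huffman_codes("abracadabra"))       # 'a' (most frequent) gets the shortest code
    print(shannon_fano_codes("abracadabra"))

On this toy input the Huffman codes total 23 bits against Shannon-Fano's 24, consistent with the abstract's finding that Huffman compresses more tightly.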
These algorithms are implemented in Visual Basic 6.0 so that they can be compared. The comparison covers the compression ratio and the compression speed achieved on text files, tested on 16 types of text files of various sizes. It can be concluded that the Huffman algorithm yields a better compression ratio (61.3% of the original size) than the Shannon-Fano algorithm (76.9%). However, the Shannon-Fano algorithm requires less compression time (a compression speed of 157.7 KBytes/s) than the Huffman algorithm (154.8 KBytes/s). Several text files are not suitable for compression with the Shannon-Fano algorithm, because the compressed file becomes larger than the original. | en_US
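As a worked illustration of the two reported metrics, the sketch below assumes, as the comparison above implies, that the compression ratio is the compressed size expressed as a percentage of the original (lower is better) and that speed is input bytes processed per second; the function compression_stats and the sample numbers are hypothetical, not measurements from the paper.

    def compression_stats(original_bytes, compressed_bytes, seconds):
        # Ratio: compressed size as a percentage of the original (lower is better).
        ratio = 100.0 * compressed_bytes / original_bytes
        # Speed: input KBytes processed per second of compression time.
        speed = original_bytes / 1024.0 / seconds
        return ratio, speed

    # Hypothetical example: a 100 KB file compressed to about 61.3 KB in 0.65 s.
    print(compression_stats(102400, 62771, 0.65))  # ~ (61.3, 153.8)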