
 
Alibek Jakupov

Information Theory: Step 4 Huffman decoder

Updated: May 6, 2019




In the previous article we implemented the Huffman algorithm.

Here is a brief summary of that article:

Given (Input)

  • A set of symbols and their weights (usually proportional to probabilities).

  • Alphabet A={a1,a2,…,an}, which is the symbol alphabet of size n.

  • Set W={w1,w2,…,wn}, which is the set of (positive) symbol weights (proportional or equal to probabilities), i.e. wi = weight(ai), 1 <= i <= n.

Find (Output)

  • A binary code (a set of codewords) with minimum expected codeword length.

  • Code C(A,W)={c1,c2,…,cn}, which is the set of (binary) codewords, where ci is the codeword for ai, 1 <= i <= n.
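As a concrete illustration (this example is mine, not from the previous article): with A = {a, b, c, d} and W = {0.4, 0.3, 0.2, 0.1}, one valid Huffman code is C = {0, 10, 110, 111}, which gives an expected codeword length of 0.4*1 + 0.3*2 + 0.2*3 + 0.1*3 = 1.9 bits per symbol.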



Huffman codes are prefix codes, and a prefix code is uniquely decodable: since no codeword is a prefix of any other codeword, a receiver can identify each symbol without requiring a special marker between codewords.

Thus it makes sense to add a decoder to our algorithm. The logic is quite simple: we construct a dictionary and then use this dictionary to decode our message, as sketched below.
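To make this concrete, here is a minimal sketch in Python (the function and variable names are illustrative assumptions, not taken from the repository). It inverts the encoder's symbol-to-codeword dictionary, then consumes the encoded bit string one bit at a time, emitting a symbol whenever the accumulated bits match a codeword. The prefix property guarantees that this greedy match is unambiguous.

```python
def huffman_decode(encoded, code):
    """Decode a Huffman-encoded bit string.

    encoded -- the bit string to decode, e.g. "010110"
    code    -- dict mapping each symbol to its codeword, e.g. {"a": "0", "b": "10"}
    """
    # Invert the encoder's dictionary: codeword -> symbol.
    reverse_code = {codeword: symbol for symbol, codeword in code.items()}

    decoded = []
    buffer = ""
    for bit in encoded:
        buffer += bit
        # The prefix property guarantees at most one match, so we can
        # emit a symbol as soon as the buffer equals some codeword.
        if buffer in reverse_code:
            decoded.append(reverse_code[buffer])
            buffer = ""

    if buffer:
        raise ValueError("Encoded message ends with an incomplete codeword")
    return "".join(decoded)


if __name__ == "__main__":
    # Using the example code from above: a=0, b=10, c=110, d=111.
    code = {"a": "0", "b": "10", "c": "110", "d": "111"}
    print(huffman_decode("010110111", code))  # -> "abcd"
```

Note that the decoder needs the same dictionary the encoder produced; in practice it is either transmitted alongside the message or rebuilt from the agreed symbol weights.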


And as usual, the GitHub link
