There is more than one correct solution. In the example the probabilities are (E,0.5); (O,0.125); (S,0.125); (U,0.10); (L,0.05); (N,0.05); (G,0.05), and Huffman coding starts by sorting the letters according to their probabilities. Since several letters have the same probability, you can sort them in different orders, and each order can yield a different code. The average number of bits per symbol is nevertheless the same for all of them, even though individual symbols may receive codewords of different lengths. For example, among L, N, and G, two may get five bits and one only four; which one gets four does not matter for the average number of bits per symbol, but it does of course matter for the particular encoding.
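A small sketch of this, assuming Python's `heapq` and one particular tie-breaking order (an insertion counter; other tie-breaks give different codeword lengths for the tied letters, but the same average):

```python
import heapq

def huffman_code_lengths(probs):
    """Build a Huffman tree and return the codeword length (bits) per symbol."""
    # Heap entries: (probability, tie-breaker, symbols in this subtree).
    heap = [(p, i, [s]) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    lengths = {s: 0 for s in probs}
    counter = len(heap)  # tie-breaker for merged nodes
    while len(heap) > 1:
        p1, _, syms1 = heapq.heappop(heap)
        p2, _, syms2 = heapq.heappop(heap)
        # Every symbol under the merged node moves one level deeper,
        # i.e. gains one bit in its codeword.
        for s in syms1 + syms2:
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, counter, syms1 + syms2))
        counter += 1
    return lengths

probs = {'E': 0.5, 'O': 0.125, 'S': 0.125, 'U': 0.10,
         'L': 0.05, 'N': 0.05, 'G': 0.05}
lengths = huffman_code_lengths(probs)
avg = sum(probs[s] * lengths[s] for s in probs)
print(lengths)
print(avg)  # average bits per symbol, identical for every valid tie-break
```

Changing how ties among L, N, G (and the equal-weight merged nodes) are broken redistributes the codeword lengths among those letters, but the printed average stays the same.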