Document number:
4485
Call number:
4213
Author:
Narimani, Hamed
Title:

A Study of the Overall Performance of Some Suboptimal Codes

Degree:
M.Sc.
Specialization:
Communication Systems
Place of study:
Isfahan: Isfahan University of Technology, Faculty of Electrical and Computer Engineering
Defense year:
1387
Pagination:
eleven, 89, [II] p.: illustrated, tables, diagrams
Note:
Title page in English and Persian
Supervisor:
Mohammad Ali Khosravifard
Advisor:
Ali Mohammad Doost Hosseini
Descriptors:
Redundancy, Kraft inequality, Huffman code, Shannon code, M-code, minimum-average criterion
Indexing date:
29/2/1388
Examiners:
Jamaloddin Golestani, Hossein Saidi
Faculty:
Electrical and Computer Engineering
IranDoc code:
ID4213
Persian abstract:
In Persian and English: available in the digital version
English abstract:
One of the basic problems in source coding is to find a prefix-free code for a given source. A code is prefix-free if no codeword is a prefix of any other one. A sequence of prefix-free codewords can be decoded with the minimum possible delay. The prefix-free property is useful in applications such as video compression, where the encoded data must be decoded instantaneously. It is well known that the codeword lengths of any binary prefix-free code satisfy the Kraft inequality. Conversely, a prefix-free code can be constructed with any given set of codelengths satisfying the Kraft inequality.

Another important constraint in encoding a source is minimizing the average codelength of the code. This constraint is useful because it helps reduce the consumption of expensive resources such as hard-disk space and transmission bandwidth. The Kraft inequality makes the entropy a lower bound on the average codeword length. Hence the redundancy, defined as the difference between the average codelength and the entropy, is usually considered as a measure for evaluating the performance of a prefix-free code.

An optimal code, i.e. the instantaneous code with the minimum average codeword length, or equivalently the minimum redundancy, for an information source can be obtained using the Huffman algorithm. However, we may sometimes need a faster but suboptimal algorithm for designing a code, for instance when the number of symbols of a source is very large and the symbol probability vector varies quickly in time, or when we intend to encode just a few output symbols of a source. In such cases one may prefer to use a suboptimal code such as the fixed-length code, the uniform code, the M-code, or the Shannon code.

In this thesis we study the overall performance of these suboptimal codes. The redundancy of each code is considered as a random variable on the set of all sources with n symbols, and the mean and the variance of the redundancy of each code are studied through exact formulation or simulation. Since all information sources with n symbols carry the same importance in our study, it is natural to assume a uniform distribution on the set of all sources. Under this assumption, the joint and marginal probability distribution functions of the symbol probabilities have been formulated. Then the mean and the variance of the redundancy are formulated as functions of n for the fixed-length code, the uniform code, the M-code, and the Shannon code.

In previous works it was shown that:
- more than … percent of n-tuple distributions have entropy greater than …;
- for more than … percent of n-tuple distributions, the redundancy of the fixed-length code is less than …;
- for more than … percent of n-tuple distributions, the redundancy of the uniform code is less than ….

In this thesis we have demonstrated that the expected value of the redundancy of the M-code on the set of monotone sources tends to the redundancy of the Huffman code for the average distribution of monotone sources as n tends to infinity. Using the Gallager bound, it can easily be concluded that the asymptotic redundancy of the Huffman code for the average distribution is less than 0.086. However, simulation reveals that this value is approximately 0.0287. This shows that the asymptotic mean of the redundancy of the Huffman code is less than 0.0287.

The mean of the redundancy of the Shannon code is also formulated. It is shown that it tends to … as n tends to infinity. Moreover, the variance of the redundancy of the Shannon code tends to 0 as n becomes large. Hence, when the number of symbols is large, the redundancy of most sources is approximately ….
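As a minimal illustration of the concepts above (not part of the thesis), the following Python sketch computes the Shannon codelengths ceil(-log2 p_i) for an arbitrary example probability vector, checks that they satisfy the Kraft inequality, and evaluates the redundancy as the average codelength minus the entropy. The function names and the example vector are illustrative.

```python
import math

def shannon_lengths(p):
    """Shannon codelengths l_i = ceil(-log2 p_i); they always satisfy the Kraft inequality."""
    return [math.ceil(-math.log2(pi)) for pi in p]

def kraft_sum(lengths):
    """Sum of 2^{-l_i}; a binary prefix-free code with these lengths exists iff the sum is <= 1."""
    return sum(2.0 ** -l for l in lengths)

def redundancy(p, lengths):
    """Average codelength minus the source entropy H(p), in bits."""
    avg_len = sum(pi * li for pi, li in zip(p, lengths))
    entropy = -sum(pi * math.log2(pi) for pi in p if pi > 0)
    return avg_len - entropy

p = [0.5, 0.25, 0.15, 0.10]              # an arbitrary example source with four symbols
lengths = shannon_lengths(p)
print("Shannon codelengths:", lengths)
print("Kraft sum:", kraft_sum(lengths))  # <= 1, so a prefix-free code with these lengths exists
print("redundancy:", round(redundancy(p, lengths), 4))
```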
We present an algorithm for generating symbol probability vectors uniformly. Using this algorithm, the mean and the variance of the redundancy of the Huffman code are estimated by simulation. It is shown via simulation that the mean of the redundancy of the Huffman code tends to 0.0287 and that its variance tends to 0. Hence, when n is large, the Huffman redundancy of most sources is approximately 0.0287. The maximum of the redundancy of t…
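The kind of simulation summarized above can be sketched as follows. This is an illustrative reconstruction, not the thesis code: it samples probability vectors uniformly from the simplex by normalizing i.i.d. exponential variates (a standard Dirichlet(1, ..., 1) sampler, which need not be the generation algorithm proposed in the thesis), computes the Huffman redundancy of each sample with a min-heap, and estimates the mean and variance by Monte Carlo. Names such as estimate_huffman_redundancy are assumptions made for the example.

```python
import heapq
import math
import random
import statistics

def huffman_redundancy(p):
    """Average binary Huffman codelength minus the entropy of probability vector p."""
    heap = [(pi, [i]) for i, pi in enumerate(p)]   # (subtree probability, symbols in subtree)
    heapq.heapify(heap)
    lengths = [0] * len(p)
    while len(heap) > 1:
        p1, s1 = heapq.heappop(heap)
        p2, s2 = heapq.heappop(heap)
        for s in s1 + s2:                          # each merge adds one bit to the merged symbols
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, s1 + s2))
    avg_len = sum(pi * li for pi, li in zip(p, lengths))
    entropy = -sum(pi * math.log2(pi) for pi in p if pi > 0)
    return avg_len - entropy

def uniform_probability_vector(n):
    """Draw a point uniformly from the (n-1)-simplex by normalizing i.i.d.
    exponential variates; a standard method, not necessarily the thesis algorithm."""
    x = [random.expovariate(1.0) for _ in range(n)]
    s = sum(x)
    return [xi / s for xi in x]

def estimate_huffman_redundancy(n, trials=500):
    """Monte Carlo estimate of the mean and variance of the Huffman redundancy
    over sources with n symbols drawn uniformly at random."""
    samples = [huffman_redundancy(uniform_probability_vector(n)) for _ in range(trials)]
    return statistics.mean(samples), statistics.variance(samples)

mean_r, var_r = estimate_huffman_redundancy(n=64)
print(f"estimated mean Huffman redundancy: {mean_r:.4f}, variance: {var_r:.6f}")
```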