IamFool
Bulat_Ziganshin, thanks — interesting and clear.

Quote:
in max compression mode it uses CM, so you should compare it with lpaq/ccm

No, I was talking about -cO. That's still LZMA, and yet it, ahem, beats FreeArc...

And by the way, another question:

Quote:
Alg: compression algorithm, referring to the method of parsing the input into symbols (strings, bytes, or bits) and estimating their probabilities (modeling) for choosing code lengths. Symbols may be arithmetic coded (fractional bit length for best compression), Huffman coded (bit aligned for speed), or byte aligned as a preprocessing step.
- Dict (Dictionary): symbols are words, coded as 1 or 2 bytes, usually as a preprocessing step.
- LZ (Lempel-Ziv): symbols are strings.
  - LZ77: repeated strings are coded by the offset and length of a previous occurrence.
  - LZW (LZ-Welch): repeats are coded as indexes into a dynamically built dictionary.
  - ROLZ (Reduced Offset LZ): LZW with multiple small dictionaries selected by context.
  - LZP (LZ Predictive): ROLZ with a dictionary size of 1.
- o-n (Order-n, e.g. o0, o1, o2, ...): symbols are bytes, modeled by frequency distribution in the context of the last n bytes.
- PPM (Prediction by Partial Match): order-n, modeled in the longest matched context, dropping to lower orders for byte counts of 0.
- SR (Symbol Ranking): order-n, modeled by time since last seen.
- BWT (Burrows-Wheeler Transform): bytes are sorted by context, then modeled by order-0 SR.
- DMC (Dynamic Markov Coding): bits modeled by PPM.
- CM (Context Mixing): bits, modeled by combining predictions of independent models.
Some compressors combine multiple steps, such as Dict+PPM or LZP+DMC. I indicate the last stage before coding.

(This is from Matt Mahoney's Large Text Compression Benchmark.)

So has the possibility of inventing new algorithms already disappeared at this point in the field's development, or what?
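To make the first item in that taxonomy concrete: here is a toy sketch of the LZ77 idea from the quote — repeated strings coded as (offset, length) of a previous occurrence, with literal bytes where no repeat is found. This is not any real compressor's implementation, just a naive greedy parse for illustration; the `window` and `min_len` parameters are my own choices.

```python
def lz77_compress(data: bytes, window: int = 4096, min_len: int = 3):
    """Greedy LZ77 parse: emit ('lit', byte) or ('match', offset, length)."""
    out, i = [], 0
    while i < len(data):
        best_len, best_off = 0, 0
        # Naive O(n^2) search over the sliding window, for clarity only.
        for j in range(max(0, i - window), i):
            k = 0
            while i + k < len(data) and data[j + k] == data[i + k]:
                k += 1
            if k > best_len:
                best_len, best_off = k, i - j
        if best_len >= min_len:
            out.append(('match', best_off, best_len))
            i += best_len
        else:
            out.append(('lit', data[i]))
            i += 1
    return out

def lz77_decompress(tokens) -> bytes:
    buf = bytearray()
    for t in tokens:
        if t[0] == 'lit':
            buf.append(t[1])
        else:
            _, off, length = t
            # Copy byte by byte so overlapping matches (offset < length) work.
            for _ in range(length):
                buf.append(buf[-off])
    return bytes(buf)
```

In a real LZ77 coder the token stream would then go to a Huffman or arithmetic back end, which is exactly the "last stage before coding" distinction the quote is making.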