RLE (run-length encoding) is a very simple string compression algorithm; if you don't know it, a quick search will explain. A while back someone quizzed me on it and asked for a quick implementation, so I wrote a very naive function. After some debugging it looked like this:
def rle_compress(input_str):
    """ RLE Compression
    >>> rle_compress('get uuuuuuuuuup')
    '1g1e1t1 10u1p'
    """
    if len(input_str) == 0:
        return ''
    if len(input_str) == 1:
        return '1' + input_str
    tmp, counter, output = input_str[0], 0, ''
    for i in xrange(len(input_str)):
        if input_str[i] == tmp:
            counter += 1
        elif counter > 0:
            output += (str(counter) + tmp)
            tmp = input_str[i]
            counter = 1
    output += (str(counter) + tmp)
    return output
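As a sanity check, here is the same algorithm ported to Python 3 (this port is mine, not the original post's code: range-style iteration instead of xrange, same run-counting logic):

```python
def rle_compress_py3(input_str):
    # Python 3 port of the naive version above: walk the string,
    # count each run, and emit "<count><char>" when the run ends.
    if not input_str:
        return ''
    tmp, counter, output = input_str[0], 0, ''
    for ch in input_str:
        if ch == tmp:
            counter += 1
        else:
            output += str(counter) + tmp
            tmp, counter = ch, 1
    # Flush the final run.
    output += str(counter) + tmp
    return output
```

Calling `rle_compress_py3('get uuuuuuuuuup')` returns `'1g1e1t1 10u1p'`, matching the doctest.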
Honestly, though, that's not very Pythonic. The Pythonic way would be this rather impressive one-liner:
from itertools import groupby

def rle_compress_iter(input_str):
    """ RLE Compression, using *groupby* from itertools
    >>> rle_compress_iter('get uuuuuuuuuup')
    '1g1e1t1 10u1p'
    """
    return ''.join([str(len(list(group))) + name for name, group in
                    groupby(input_str)])
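To see why the one-liner works, note what groupby actually yields: one (key, group-iterator) pair per run of consecutive equal elements, which is exactly the structure RLE encodes:

```python
from itertools import groupby

# Each run of equal characters becomes one (char, run-length) pair;
# materializing the group with list() gives the run's length.
runs = [(name, len(list(group))) for name, group in groupby('get uuuuuuuuuup')]
# runs == [('g', 1), ('e', 1), ('t', 1), (' ', 1), ('u', 10), ('p', 1)]
```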
Of course, I couldn't have come up with such a powerful function myself; the idea comes from the book "Expert Python Programming". (Shameless plug: it's an excellent book, you really should get a copy!)
Here's the problem: although using Python's iterators/generators is absolutely a best practice every Python programmer should follow, the performance gap between these two functions is quite large (total time for 1000 repeated executions; all timings below use the same setup):
manual compress: 0.663447s
original calls: 2.723927s
The test text is a paragraph from Wikipedia about Isaac Asimov (to give the RLE algorithm some sense of achievement, I added some repeated characters myself):
TEXT = """
The Foooooooooooooooooooooooooundation Series is a science fiction series by Isaac Asimov which covers a
span of about 500 years. It ccccccccccconsists of seven volumes that are closely linked to
each other, alttttttttttthough they can be read separately. The term "Foundation Series"
is often used more generaaally to include the Robot Series and Empire Series,
which are set in the same fictional universe, but in earlier time periods. In
total, there are fifteen nnnnnnnnnnnnnnnnnovels and dozens of short stories written by Asimov,
and six novels written by other authors after his death, expanding the
timeeeeeeeeeeeeeeeeee
spanned by more than nnnnnnnnnnnnnnnnnnnnnnntwenty thousand years. The series is highly acclaimed,
winninggggggggggggggggggggggggg the one-time Hugo Award for "Best All-Time Series" in 1966.[1]
"""
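The exact benchmark harness isn't shown in the post; a minimal sketch of the methodology (total wall time for 1000 repeated calls) using timeit might look like this (the stand-in text and names here are my own):

```python
import timeit
from itertools import groupby

TEXT = "get uuuuuuuuuup " * 100  # stand-in for the Wikipedia paragraph above

def rle_compress_iter(input_str):
    return ''.join([str(len(list(group))) + name for name, group in
                    groupby(input_str)])

# Total time for 1000 calls, mirroring the post's "1000 repetitions" setup.
elapsed = timeit.timeit(lambda: rle_compress_iter(TEXT), number=1000)
```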
What the...? How can that be...
OK, maybe it isn't itertools.groupby's fault; when I wrote that function I only wanted to pass the doctest as quickly as possible. Let's optimize it.
First attempt:
Instead of using + to concatenate the run length and the character, how about '%d%s' % (...)?
    return ''.join(['%d%s' % (len(list(group)), name) for name, group
                    in groupby(input_str)])
It turns out the result is rather tragic:
strformat calls: 3.933721s
Second attempt:
group itself is also an iterator; len(list(group)) was just my completely unoptimized first idea. There should be a better way to get the length of an iterator, for example:
def rle_compress_iter_leniter(input_str):
    def leniter(iterator):
        l = 0
        for _ in iterator:
            l += 1
        return l
    return ''.join([str(leniter(group)) + name for name, group in
                    groupby(input_str)])
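One thing worth noting about leniter: it counts by exhausting the iterator item by item, so unlike len(list(...)) it never allocates an intermediate list, and it works on any iterable, including generators:

```python
def leniter(iterator):
    # Count elements by consuming the iterator one item at a time;
    # no temporary list is built.
    l = 0
    for _ in iterator:
        l += 1
    return l

count = leniter(ch for ch in 'uuuuuuuuuu')  # a generator, not a list
# count == 10
```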
Ha! It may look ugly, but it's a big improvement:
leniter calls: 0.928560s
Third attempt:
As I said, the previous approach is too ugly. I want it to look a bit more elegant, so let's use a list comprehension:
    return ''.join([str(sum([1 for _ in group])) + name for name, group
                    in groupby(input_str)])
This uses the built-in sum function to compute the iterator's length, but the result doesn't look great:
strsum calls: 1.262965s
Fourth attempt:
Looking at the code again, the sum is actually unnecessary: the length of that list equals the length of the iterator, so replacing sum with len might be better:
    return ''.join([str(len([1 for _ in group])) + name for name,
                    group in groupby(input_str)])
It does improve things a bit, but it's still not as fast as the leniter version:
strlen calls: 1.059268s
Fifth attempt:
Using str() to convert the integer to a string might also be improvable; to turn an object into a string we can try repr() instead:
    return ''.join([repr(len([1 for _ in group])) + name for name,
                    group in groupby(input_str)])
OK, this time it really is faster, noticeably so, and it even beats leniter:
repr calls: 0.840248s
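The str() vs repr() gap can be checked in isolation with timeit (a sketch, not the post's measurement; the absolute numbers, and even the winner, depend on the interpreter version, and on modern Python 3 the difference is usually negligible):

```python
import timeit

# For ints the two produce identical strings, so any speed
# difference is pure call/dispatch overhead.
assert str(1966) == repr(1966) == '1966'

t_str = timeit.timeit('str(1966)', number=100_000)
t_repr = timeit.timeit('repr(1966)', number=100_000)
```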
Sixth attempt:
Two weeks ago Shen Wei, a guru from NetEase, came to Kingsoft to help us rework one of our programs. His code contained some "odd-looking" bits, which are really just "historical usages" that Python newcomers like us have rarely seen. Using the backtick '`' in place of repr() is one example:
    return ''.join([`len([1 for _ in group])` + name for name, group in
                    groupby(input_str)])
It turns out the backtick is even faster than calling repr() directly! (Note that backtick syntax was removed in Python 3; this trick is Python 2 only.)
backtick calls: 0.784968s
---------------------------
That's really close to the original naive algorithm, but I'm out of tricks. If anyone has a better idea, please do email me.
A one-liner really tempts you to define the function with lambda instead of def, but lambda is actually slower...
The naive code may be simple-minded, but the Pythonic versions never managed to catch up with it (at least not at my modest skill level). Next time I write performance-sensitive Python code, I must remember to drop the fancy moves and start from the simplest thing that works.