Reduce, reuse, recycle
When dealing with Lua resources, we should apply the same three R's promoted for the Earth's resources.
Reduce is the simplest alternative. There are several ways to avoid the need for new objects. For instance, if your program uses too many tables, you may consider a change in its data representation. As a simple example, consider that your program manipulates polylines. The most natural representation for a polyline in Lua is as a list of points, like this:
polyline = { { x = 10.3, y = 98.5 },
             { x = 10.3, y = 18.3 },
             { x = 15.0, y = 98.5 },
             ...
           }
Although natural, this representation is not very economical for large polylines, as it needs a table for each single point. A first alternative is to change the records into arrays, which use less memory:
polyline = { { 10.3, 98.5 },
             { 10.3, 18.3 },
             { 15.0, 98.5 },
             ...
           }
For a polyline with one million points, this change reduces the use of memory from 95 KBytes to 65 KBytes. Of course, you pay a price in readability: p[i].x is easier to understand than p[i][1].
A yet more economic alternative is to use one list for the x coordinates and another one for the y coordinates:
polyline = { x = { 10.3, 10.3, 15.0, ...},
             y = { 98.5, 18.3, 98.5, ...}
           }
The original p[i].x now is p.x[i]. With this representation, a one-million-point polyline uses only 24 KBytes of memory.
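The difference between these layouts is easy to check empirically. The following sketch compares the memory retained by the record-based and the column-based representations using collectgarbage("count"); the helper measure and the point count are illustrative assumptions, and the exact numbers will vary by Lua version and platform:

```lua
-- Hypothetical helper: returns the memory (in KBytes) retained after
-- building a polyline with the given constructor.
local function measure(build)
  collectgarbage("collect")
  local before = collectgarbage("count")   -- heap size in KBytes
  local poly = build(100000)               -- 100,000 points (arbitrary)
  collectgarbage("collect")
  local after = collectgarbage("count")    -- 'poly' is still alive here
  return after - before
end

kb_records = measure(function (n)
  local t = {}
  for i = 1, n do t[i] = { x = i, y = i } end
  return t
end)

kb_columns = measure(function (n)
  local t = { x = {}, y = {} }
  for i = 1, n do t.x[i] = i; t.y[i] = i end
  return t
end)

print(kb_records, kb_columns)   -- the column layout uses far less memory
```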
A good place to look for chances of reducing garbage creation is in loops. For instance, if a constant table is created inside a loop, you can move it out of the loop, or even out of the function enclosing its creation. Compare:
function foo (...)
  for i = 1, n do
    local t = {1, 2, 3, "hi"}
    -- do something without changing 't'
    ...
  end
end

local t = {1, 2, 3, "hi"}   -- create 't' once and for all
function foo (...)
  for i = 1, n do
    -- do something without changing 't'
    ...
  end
end
The same trick may be used for closures, as long as you do not move them out of the scope of the variables they need. For instance, consider the following function:
function changenumbers (limit, delta)
  for line in io.lines() do
    line = string.gsub(line, "%d+", function (num)
      num = tonumber(num)
      if num >= limit then return tostring(num + delta) end
      -- else return nothing, keeping the original number
    end)
    io.write(line, "\n")
  end
end
We can avoid the creation of a new closure for each line by moving the inner function outside the loop:
function changenumbers (limit, delta)
  local function aux (num)
    num = tonumber(num)
    if num >= limit then return tostring(num + delta) end
  end
  for line in io.lines() do
    line = string.gsub(line, "%d+", aux)
    io.write(line, "\n")
  end
end
However, we cannot move aux outside function changenumbers, because there it cannot access limit and delta.
For many kinds of string processing, we can reduce the need for new strings by working with indices over existing strings. For instance, the string.find function returns the position where it found the pattern, instead of the match. By returning indices, it avoids creating a new (sub)string for each successful match. When necessary, the programmer can get the match substring by calling string.sub.4
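A minimal illustration of this idiom (not from the original text):

```lua
local s = "hello Lua user"

-- string.find returns the indices of the match; no new string is created
local i, j = string.find(s, "Lua")
print(i, j)                    --> 7  9

-- only when we actually need the text do we pay for a new string
print(string.sub(s, i, j))     --> Lua
```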
When we cannot avoid the use of new objects, we still may avoid creating these new objects through reuse. For strings, reuse is not necessary, because Lua does the job for us: it always internalizes all strings it uses, therefore reusing them whenever possible. For tables, however, reuse may be quite effective. As a common case, let us return to the situation where we are creating new tables inside a loop. This time, however, the table contents are not constant. Nevertheless, frequently we still can reuse the same table in all iterations, simply changing its contents. Consider this chunk:
local t = {}
for i = 1970, 2000 do
  t[i] = os.time({year = i, month = 6, day = 14})
end
The next one is equivalent, but it reuses the table:
local t = {}
local aux = {year = nil, month = 6, day = 14}
for i = 1970, 2000 do
  aux.year = i
  t[i] = os.time(aux)
end
A particularly effective way to achieve reuse is through memoizing. The basic idea is quite simple: store the result of some computation for a given input so that, when the same input is given again, the program simply reuses that previous result.
LPeg, a new package for pattern matching in Lua, makes an interesting use of memoizing. LPeg compiles each pattern into an internal representation, which is a "program" for a parsing machine that performs the matching. This compilation is quite expensive compared with the matching itself. So, LPeg memoizes the results of its compilations to reuse them. A simple table associates the string describing a pattern with its corresponding internal representation.
A common problem with memoizing is that the cost in space to store previous results may outweigh the gains of reusing those results. To solve this problem in Lua, we can use a weak table to keep the results, so that unused results are eventually removed from the table.
In Lua, with higher-order functions, we can define a generic memoization function:
-----------------------
4 It would be a good idea for the standard library to have a function to compare substrings, so that we could check specific values inside a string without having to extract that value from the string (thereby creating a new string).
function memoize (f)
  local mem = {}                       -- memoizing table
  setmetatable(mem, {__mode = "kv"})   -- make it weak
  return function (x)                  -- new version of 'f', with memoizing
    local r = mem[x]
    if r == nil then                   -- no previous result?
      r = f(x)                         -- call the original function
      mem[x] = r                       -- store result for reuse
    end
    return r
  end
end
Given any function f, memoize(f) returns a new function that returns the same results as f but memoizes them. For instance, we can redefine loadstring with a memoizing version:
loadstring = memoize(loadstring)
We use this new function exactly like the old one, but if there are many repeated strings among those we are loading, we can have a substantial performance gain.
If your program creates and frees too many coroutines, recycling may be an option to improve its performance. The current API for coroutines does not offer direct support for reusing a coroutine, but we can circumvent this limitation. Consider the next coroutine:
co = coroutine.create(function (f)
  while f do
    f = coroutine.yield(f())
  end
end)
This coroutine accepts a job (a function to run), runs it, and when the job finishes, waits for the next one.
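As a sketch of how such a coroutine might be recycled (restating the coroutine so the example is self-contained), each resume hands it a new job, and the coroutine yields the job's result back:

```lua
local co = coroutine.create(function (f)
  while f do
    f = coroutine.yield(f())   -- run the job, yield its result, wait for the next
  end
end)

-- reuse the same coroutine for several jobs
ok1, r1 = coroutine.resume(co, function () return 10 end)
ok2, r2 = coroutine.resume(co, function () return 20 end)
print(r1, r2)   --> 10  20
```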
Most recycling in Lua is done automatically by the garbage collector. Lua uses an incremental garbage collector. That means that the collector performs its task in small steps (incrementally) interleaved with the program execution. The pace of these steps is proportional to memory allocation: for each amount of memory allocated by Lua, the garbage collector does some proportional work. The faster the program consumes memory, the faster the collector tries to recycle it.
If we apply the principles of reduce and reuse to our program, usually the collector will not have too much work to do. But sometimes we cannot avoid the creation of large amounts of garbage and the collector may become too heavy. The garbage collector in Lua is tuned for average programs, so that it performs reasonably well in most applications. However, sometimes we can improve the performance of a program by better tuning the collector for that particular case.
We can control the garbage collector through function collectgarbage, in Lua, or lua_gc, in C. Both offer basically the same functionality, although with different interfaces. For this discussion I will use the Lua interface, but often this kind of manipulation is better done in C.
Function collectgarbage provides several functionalities: it may stop and restart the collector, force a full collection cycle, force a collection step, get the total memory in use by Lua, and change two parameters that affect the pace of the collector. All of them have their uses when tuning a memory-hungry program.
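In Lua 5.1, these functionalities map onto string options of collectgarbage. A quick sketch (the numeric values passed to the pace parameters here are arbitrary):

```lua
collectgarbage("stop")        -- stop the collector
collectgarbage("restart")     -- restart it
collectgarbage("collect")     -- force a full collection cycle
collectgarbage("step")        -- force a single collection step

kb = collectgarbage("count")  -- total memory in use by Lua, in KBytes

-- the two pace parameters; each call returns the previous value
oldpause = collectgarbage("setpause", 200)
oldmul   = collectgarbage("setstepmul", 200)
```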
Stopping the collector "forever" may be an option for some kinds of batch programs, which create several data structures, produce some output based on those structures, and exit (e.g., compilers). For those programs, trying to collect garbage may be a waste of time, as there is little garbage to be collected and all memory will be released when the program finishes.
For non-batch programs, stopping the collector forever is not an option. Nevertheless, those programs may benefit from stopping the collector during some time-critical periods. If necessary, the program may get full control of the garbage collector by keeping it stopped at all times, only allowing it to run by explicitly forcing a step or a full collection. For instance, several event-driven platforms offer an option to set an idle function, which is called when there are no other events to handle. This is a perfect time to do garbage collection. (In Lua 5.1, each time you force some collection when the collector is stopped, it automatically restarts. So, to keep it stopped, you must call collectgarbage("stop") immediately after forcing some collection.)
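The idle-function pattern described above might look like this; on_idle is a hypothetical hook of some event-driven platform, not a real API:

```lua
collectgarbage("stop")         -- take full control of the collector

-- hypothetical hook: registered to run when there are no pending events
function on_idle()
  collectgarbage("step")       -- do a small amount of collection work
  collectgarbage("stop")       -- in Lua 5.1, forcing a step restarts the
                               -- collector, so stop it again right away
end
```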
Finally, as a last resort, you may try to change the collector parameters. The collector has two parameters controlling its pace. The first one, called pause, controls how long the collector waits between finishing a collection cycle and starting the next one. The second parameter, called stepmul (from step multiplier), controls how much work the collector does in each step. Roughly, smaller pauses and larger step multipliers increase the collector's speed.
The effect of these parameters on the overall performance of a program is hard to predict. A faster collector clearly wastes more CPU cycles per se; however, it may reduce the total memory in use by a program, which in turn may reduce paging. Only careful experimentation can give you the best values for those parameters.