1 The basic job of any linker
or loader is simple: it binds more abstract names to more concrete names, which
permits programmers to write code using the more abstract names.
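A minimal two-file sketch of that binding, in C (the file names and the greet() function are made up for illustration): the compiler leaves the call site as a reference to the abstract name, and the linker binds it to a concrete address.

    /* greet.c -- supplies the concrete definition behind the name */
    #include <stdio.h>

    void greet(void)
    {
        printf("hello\n");
    }

    /* main.c -- uses only the abstract name; no address is known here */
    extern void greet(void);    /* declaration: an unresolved symbol */

    int main(void)
    {
        greet();    /* the compiler emits a relocation for "greet";
                       the linker fills in greet's final address */
        return 0;
    }

Compiling with cc -c main.c and inspecting with nm main.o shows "U greet": an undefined symbol waiting for the linker to bind it.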
2 Overlays were
a technique that let programmers arrange for different parts of a program to
share the same memory, with each overlay loaded on demand when another part of
the program called into it.
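Real overlays have no portable modern equivalent, but a rough analogue can be sketched with POSIX dlopen(): each "overlay" is a shared object loaded on demand and evicted before the next one is brought in, so the two never occupy memory at the same time. The file names ov_a.so and ov_b.so and the entry-point name "entry" are made up for the sketch; on older glibc, link with -ldl.

    #include <dlfcn.h>
    #include <stdio.h>

    static void *current;              /* the overlay resident right now */

    static void call_overlay(const char *path)
    {
        if (current)
            dlclose(current);          /* evict the previous overlay, the way
                                          an overlay manager reused its region */
        current = dlopen(path, RTLD_NOW);
        if (!current) {
            fprintf(stderr, "%s\n", dlerror());
            return;
        }
        void (*entry)(void) = (void (*)(void))dlsym(current, "entry");
        if (entry)
            entry();                   /* call into the freshly loaded part */
    }

    int main(void)
    {
        call_overlay("./ov_a.so");     /* loaded on demand when first needed */
        call_overlay("./ov_b.so");     /* replaces ov_a's code in memory */
        return 0;
    }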
3 With the advent of hardware
relocation and virtual memory, linkers
and loaders actually got less complex, since each program could again have an
entire address space. Programs could be
linked to be loaded at fixed addresses, with hardware rather than software
relocation taking care of any load-time relocation. But
computers with hardware relocation invariably run more than one program,
frequently multiple copies of the same program. When a computer runs multiple
instances of one program, some parts of the program are the same among all
running instances (the executable code, in particular), while other parts are
unique to each instance. If the parts that don't change can be separated out
from the parts that do change, the operating system can use a single copy of
the unchanging part, saving considerable storage. Compilers and assemblers were
modified to create object code in multiple sections, with one section for
read-only code and another for writable data. The linker had to be able to
combine all of the sections of each type so that the linked program would have all
the code in one place and all of the data in another. This didn't delay address
binding any more than it already was, since addresses were still assigned at
link time, but more work was deferred to the linker to assign addresses for all
the sections.

----- Lacrimosa: That's why the assembled code is divided into "data" and "code" sections, or more sections. So I get the original answer. :)
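A small C sketch of that split (the section names below are the conventional ELF ones; exact placement is compiler-dependent):

    #include <stdio.h>

    const char banner[] = "hello";    /* read-only data: .rodata */
    int counter = 42;                 /* initialized writable data: .data */
    int scratch;                      /* zero-initialized data: .bss */

    int main(void)                    /* machine code: .text */
    {
        printf("%s %d %d\n", banner, counter, scratch);
        return 0;
    }

Running size or objdump -h on the compiled object shows the per-section sizes; at link time the linker merges every object file's .text into one read-only region, which the OS can then map shared across all running instances.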