
Even a casual reader of the kernel source code is likely to run into invocations of the ACCESS_ONCE() macro eventually; there are well over 200 of them in the current source tree. Many such readers probably do not stop to understand just what that macro means; a recent discussion on the mailing list made it clear that even core kernel developers may not have a firm idea of what it does. Your editor was equally ignorant but decided to fix that; the result, hopefully, is a reasonable explanation of why ACCESS_ONCE() exists and when it must be used.

The functionality of this macro is actually well described by its name; its purpose is to ensure that the value passed as a parameter is accessed exactly once by the generated code. One might well wonder why that matters. It comes down to the fact that the C compiler will, if not given reasons to the contrary, assume that there is only one thread of execution in the address space of the program it is compiling. Concurrency is not built into the C language itself, so mechanisms for dealing with concurrent access must be built on top of the language; ACCESS_ONCE() is one such mechanism.

Consider, for example, the following code snippet from kernel/mutex.c:


    for (;;) {
	struct task_struct *owner;

	owner = ACCESS_ONCE(lock->owner);
	if (owner && !mutex_spin_on_owner(lock, owner))
	    break;
 	/* ... */

This is a small piece of the adaptive spinning code that hopes to quickly grab a mutex once the current owner drops it, without going to sleep. There is much more to this for loop than has been shown here, but this code is sufficient to show why ACCESS_ONCE() can be necessary.

Imagine for a second that the compiler in use is developed by fanatical developers who will optimize things in every way they can. This is not a purely hypothetical scenario; as Paul McKenney recently put it: "I have seen the glint in their eyes when they discuss optimization techniques that you would not want your children to know about!" These developers might create a compiler that concludes that, since the code in question does not actually modify lock->owner, it is not necessary to actually fetch its value each time through the loop. The compiler might then rearrange the code into something like:


    owner = ACCESS_ONCE(lock->owner);
    for (;;) {
	if (owner && !mutex_spin_on_owner(lock, owner))
	    break;

What the compiler has missed is the fact that lock->owner is being changed by another thread of execution entirely. The result is code that will fail to notice any such changes as it executes the loop multiple times, leading to unpleasant results. The ACCESS_ONCE() call prevents this optimization happening, with the result that the code (hopefully) executes as intended.

As it happens, an optimized-out access is not the only peril that this code could encounter. Some processor architectures (x86, for example) are not richly endowed with registers; on such systems, the compiler must make careful choices regarding which values to keep in registers if it is to generate the highest-performing code. Specific values may be pushed out of the register set, then pulled back in later. Should that happen to the mutex code above, the result could be multiple references to lock->owner. And that could cause trouble; if the value of lock->owner changed in the middle of the loop, the code, which is expecting the value of its local owner variable to remain constant, could become fatally confused. Once again, the ACCESS_ONCE() invocation tells the compiler not to do that, avoiding potential problems.
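
That double-read hazard can be sketched in compilable form. The types and the spin_on_owner() helper below are hypothetical stand-ins for the kernel's, with ACCESS_ONCE() reproduced as the kernel defines it (it relies on GCC's typeof extension):

```c
#include <stddef.h>

#define ACCESS_ONCE(x) (*(volatile __typeof__(x) *)&(x))

struct task_s { int pid; };                 /* stand-in for task_struct */
struct lock_s { struct task_s *owner; };    /* stand-in for the mutex */

/* Hypothetical helper standing in for mutex_spin_on_owner(). */
static int spin_on_owner(struct lock_s *lock, struct task_s *owner)
{
    (void)lock;
    return owner != NULL;
}

/* Because lock->owner is read exactly once, the pointer tested by
 * the if() and the pointer passed to spin_on_owner() are guaranteed
 * to be the same value.  Without ACCESS_ONCE(), a register-starved
 * compiler could spill "owner" and reload it from lock->owner,
 * turning one logical read into two. */
int check_owner(struct lock_s *lock)
{
    struct task_s *owner = ACCESS_ONCE(lock->owner);
    return owner && spin_on_owner(lock, owner);
}
```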

The actual implementation of ACCESS_ONCE(), found in <linux/compiler.h>, is fairly straightforward:


    #define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))

In other words, it works by turning the relevant variable, temporarily, into a volatile type.
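
As a minimal userspace illustration (again relying on GCC's typeof extension; the function names are made up for the example), each use of the macro is one real memory access that the compiler may not cache, hoist, or duplicate:

```c
#define ACCESS_ONCE(x) (*(volatile __typeof__(x) *)&(x))

/* A single volatile load of *p: the compiler must emit exactly
 * one memory read here. */
int fetch_once(int *p)
{
    return ACCESS_ONCE(*p);
}

/* Likewise, a single volatile store. */
void store_once(int *p, int v)
{
    ACCESS_ONCE(*p) = v;
}
```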

Given the kinds of hazards presented by optimizing compilers, one might well wonder why this kind of situation does not come up more often. The answer is that most concurrent access to data is (or certainly should be) protected by locks. Spinlocks and mutexes both function as optimization barriers, meaning that they prevent optimizations on one side of the barrier from carrying over to the other. If code only accesses a shared variable with the relevant lock held, and if that variable can only change when the lock is released (and held by a different thread), the compiler will not create subtle problems. It is only in places where shared data is accessed without locks (or explicit barriers) that a construct like ACCESS_ONCE() is required. Scalability pressures are causing the creation of more of this type of code, but most kernel developers still should not need to worry about ACCESS_ONCE() most of the time.
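
For a concrete picture of the kind of lockless site that needs the macro, here is a sketch of a polling loop (hypothetical names, kernel macro reproduced for a userspace GCC build). Under a lock this would be unnecessary, since the locking primitives themselves act as optimization barriers:

```c
#define ACCESS_ONCE(x) (*(volatile __typeof__(x) *)&(x))

int stop_flag;   /* written by another thread in real use */

/* Lockless wait loop: the volatile access makes the compiler
 * reload stop_flag on every iteration, so a store from another
 * thread is eventually observed rather than hoisted out of
 * the loop. */
void wait_for_stop(void)
{
    while (!ACCESS_ONCE(stop_flag))
        ;   /* spin */
}
```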




ACCESS_ONCE()

Posted Aug 2, 2012 15:16 UTC (Thu) by tvld (subscriber, #59052) []

"Concurrency is not built into the C language itself"
It is in C11, but not in prior versions.
"Given the kinds of hazards presented by optimizing compilers, one might well wonder why this kind of situation does not come up more often."
I believe we typically do want compilers to optimize code, so I don't understand the negativity towards compiler optimizations in the article (nor towards the folks that write those optimizations). If you're going to synchronize, you need to let the compiler know. It's unrealistic to expect C/C++ compilers to find out exactly which variables might be accessed in a multi-threaded fashion, and nobody would like the performance degradation that conservative assumptions for that would cause.


ACCESS_ONCE()

Posted Aug 2, 2012 15:35 UTC (Thu) by corbet (editor, #1) []

Negativity? I certainly didn't intend any negativity. One can say that compiler optimizations can present subtle hazards in the presence of concurrency without being negative toward optimizations in general. We like optimizations; we just need a way to keep them from creating bugs.


ACCESS_ONCE()

Posted Aug 3, 2012 16:52 UTC (Fri) by daglwn (subscriber, #65432) []

It seems to me the bug is in the kernel. Either lock or lock->owner should be volatile-qualified in its declaration.



ACCESS_ONCE()

Posted Aug 3, 2012 16:55 UTC (Fri) by corbet (editor, #1) []

Nobody said the (potential) bug was anywhere else. But declaring that variable as volatile would deoptimize things much more than is necessary. Rather than do that, the developers involved used ACCESS_ONCE() and things work as they should.


ACCESS_ONCE()

Posted Aug 3, 2012 22:00 UTC (Fri) by daglwn (subscriber, #65432) []

> We like optimizations; we just need a way to keep them from creating bugs.

That's what I was responding to. As a professional compiler developer, it's really irritating to read about compilers "creating bugs" when the problem is actually with undefined, unspecified or implementation-defined behavior in the code.



ACCESS_ONCE()

Posted Aug 7, 2012 10:17 UTC (Tue) by dgm (subscriber, #49227) []

> the problem is actually with undefined, unspecified or implementation-defined behavior in the code.

Compiler developers (professional or otherwise) should take into account that not all developers are as aware as they are of each and every language subtlety. Most compiler users use their common sense, not the language specification, to infer what the code does.

If something has the potential of doing harm, and the user is perfectly capable of doing it by hand anyway if really needed, then it's best for the compiler not to do it. I believe that the case at hand is a perfect example of that.


ACCESS_ONCE()

Posted Aug 7, 2012 11:40 UTC (Tue) by gioele (subscriber, #61675) []

> > the problem is actually with undefined, unspecified or implementation-defined behavior in the code.

> Compiler developers (professional or otherwise) should take into account that not all developers are as aware as they of each and all language subtleties. Most compiler users use their common sense, not the language specification, to infer what the code does.

The famous --do-what-I-mean-not-what-I-write compiler switch.

It is risky to abandon the written spec and start developing against unwritten common sense stashed in the brains of developers.

This is dangerous because it assumes something that is even more difficult to achieve than making the developers know the spec by heart: it assumes that all the developers around the globe know the same part of the spec and all use it in the same, exact, consistent way.

There will always be differences in the way people use tools and languages. These differences will create conflicts. I think that the best way to resolve conflicts is to point out the relevant part of the spec that dictates what to do. Yes, the spec may not be well written or may be ambiguous, but those are problems that can be solved. The alternative is to argue in a bug report and let the most vocal person (or the one backed by the biggest implementer) win, ignoring the fact that there may be heaps of people out there using the language in the way the less vocal debater thinks it should be used.


ACCESS_ONCE()

Posted Aug 8, 2012 13:31 UTC (Wed) by dgm (subscriber, #49227) []

> The famous --do-what-I-mean-not-what-I-write compiler switch.

That would be a good switch, yes. By default the compiler should be in --do-what-I-write-not-what-some-ambiguous-line-of-the-spec-allows-you-to-interpret, though. It would be better for everybody.


ACCESS_ONCE()

Posted Aug 8, 2012 16:51 UTC (Wed) by daglwn (subscriber, #65432) []

That *is* the --do-what-I-mean-not-what-I-write switch.

How can the compiler possibly know what you meant when you wrote something outside the language spec?

Take a very simple but common issue: undefined data. There are countless compiler analyses and transformations that move code around, reallocate stack space, etc. that cause the undefined variable to have different (garbage) values. What might happen to work one day most assuredly will not when compiled with a compiler 2-3 versions newer.

What should the compiler do? Disable all code motion of and around the offending expression? That would be lunacy.



ACCESS_ONCE()

Posted Aug 9, 2012 3:47 UTC (Thu) by mmorrow (subscriber, #83845) []

> Take a very simple but common issue: undefined data. There are countless compiler analyses and transformations that move code around, reallocate stack space, etc. that cause the undefined variable to have different (garbage) values. What might happen to work one day most assuredly will not when compiled with a compiler 2-3 versions newer.

A good example of this can be seen in GCC's constant propagation implementation (tree-ssa-ccp.c):
/*
...
- If an argument has an UNDEFINED value, then it does not affect
  the outcome of the meet operation.  If a variable V_i has an
  UNDEFINED value, it means that either its defining statement
  hasn't been visited yet or V_i has no defining statement, in
  which case the original symbol 'V' is being used
  uninitialized. Since 'V' is a local variable, the compiler
  may assume any initial value for it.
...
*/
And we can see this in action with e.g.
size_t f(int x){size_t a; if(x) a = 42; return a;}
which gives
f:
  movl  $42, %eax
  ret
  .ident  "GCC: (GNU) 4.8.0 20120408 (experimental)"


ACCESS_ONCE()

Posted Aug 10, 2012 7:40 UTC (Fri) by jezuch (subscriber, #52988) []

> Take a very simple but common issue: undefined data.

Undefined data, hah. In Java it is a compilation error to use a variable that is not provably assigned in all possible code paths between declaration and use. I like this feature *a lot* and I'm always surprised that C compilers have such a great difficulty with detecting it. (The JVM verifier does this analysis as well during class loading, so it has to be *fast* as well as correct.)


ACCESS_ONCE()

Posted Aug 10, 2012 14:58 UTC (Fri) by quanstro (guest, #77996) []

the claim above is that the choice is to
(a) follow the standard, or
(b) do something arbitrary.

that is not the choice in this case. the choice is to
either
(a) rearrange the code in optimization, or
(b) leave it as the developer wrote it.

personally, i find this sort of code reorg by compilers
to be problematic as it generally makes code quite hard
to reason about. experienced developers would lift the
assignment out of the loop if it mattered, and it were
legal.



ACCESS_ONCE()

Posted Aug 10, 2012 15:40 UTC (Fri) by daglwn (subscriber, #65432) []

> the claim above is that the choice is to
> (a) follow the standard, or
> (b) do something arbitrary.

This is a false choice. (a) and (b) are the same thing in the presence of undefined/unspecified/implementation-defined behavior. And it's not arbitrary. It's the decision of the compiler engineers.

> personally, i find this sort of code reorg by compilers
> to be problematic as it generally makes code quite hard
> to reason about. experienced developers would lift the
> assignment out of the loop if it mattered, and it were
> legal.

I hear this a lot. Then people get surprised when I show them what the compiler did to their code. Believe me, there is no reason anyone should waste time hand-optimizing code without proof of need. Either they're going to miss a lot of opportunity or they are going to screw things up and make the compiler's job harder.

If a developer were to hand-optimize code to achieve the same performance result, the source code would be unmaintainable.

We want optimizing compilers. I can't believe anyone would suggest otherwise.



ACCESS_ONCE()

Posted Aug 12, 2012 16:16 UTC (Sun) by quanstro (guest, #77996) []

in the example given in the op, there was no reason for the read to be in the
loop, except if it might change. the compiler made the assumption that the
coder was wrong. that might not be a useful assumption.

as i see it, the trade-off here is non-standard constructions, and the
principle of least surprise for performance.

i'm not convinced of the claim that this is always a win.

the compiler i use on a daily basis does not do strength reduction or optimize
away indirection. it assumes you know what you're doing. i don't notice that
it is slower. i also don't have to worry that the compiler will break my
drivers by "optimizing" them.

(never mind that with modern intel cpus, strength reduction can be a loss due
to microop caching.)

i think this is a good trade off for my case because it avoids obtuse, and
non-portable constructions that can be hard to remember to apply. that is,
for most code, developer time is more expensive than run time.

just my two cents, and i realize that there are cases in the linux kernel
where complex macros need this sort of optimization. but perhaps that's
complexity begetting itself.



ACCESS_ONCE()

Posted Aug 13, 2012 12:05 UTC (Mon) by nye (guest, #51576) []

>in the example given in the op, there was no reason for the read to be in the
> loop, except if it might change. the compiler made the assumption that the
> coder was wrong. that might not be a useful assumption.

No, the coder *was* wrong, and the assumption is *always correct in standard C*. That's the point. The programmer might have assumed semantics which are not C, but the compiler merely assumed that the programmer was writing in C, not writing in some unspecified language that looks a lot *like* C and exists only in the programmer's head.

It is axiomatic that a valid optimisation (ie. one which precisely follows C semantics, and any which don't are buggy and tend to be quickly fixed) cannot break correct valid C; if code breaks then it is because the programmer has made incorrect assumptions about the exact meaning *in C* of what they're writing.

If your variable might change between accesses, then you need to tell the compiler that, because it is not the case in the standard C model, which is why there's a keyword existing specifically for that purpose.


ACCESS_ONCE()

Posted Aug 16, 2012 14:52 UTC (Thu) by quanstro (guest, #77996) []

i agree with what you say, but that's not the point i'm trying to make.
(and btw, i don't think that ACCESS_ONCE is standard c. nor can the
kernel be compiled with an arbitrary compiler; it depends on gcc.)

for me, making code safe from all possible according-to-hoyle legal
transformations of the code is not really interesting or useful.
i'd much rather focus on having a tractable, easy-to-use programming
environment.

if restricting the compiler from making some theoretically legal
code transformations reduces bugs and generally makes life easier,
then why not do it?

as it is i believe there are some gcc optimizations that can break the
kernel.



ACCESS_ONCE()

Posted Aug 19, 2012 19:38 UTC (Sun) by PaulMcKenney (subscriber, #9624) []

ACCESS_ONCE() is simply the macro called out in the article, which simply uses the volatile keyword in a cast, which is part of standard C.


ACCESS_ONCE()

Posted Aug 10, 2012 17:33 UTC (Fri) by nix (subscriber, #2304) []

> personally, i find this sort of code reorg by compilers to be problematic as it generally makes code quite hard to reason about.

I'm curious. Why don't you say the same about register allocation? Combined with stack spilling, that can often have the same effect as code motion. How do you plan to get rid of that?

(I've seen this from various safety-critical embedded people too: they want the compiler to 'not optimize'. I've tried pointing out that this is a meaningless request, that translation necessarily implies many of the same transformations that optimization does, but they never seem to get it. What they generally *mean* is that they want the smallest possible number of transformations -- or perhaps that they want the transformations to be guaranteed bug-free.)


ACCESS_ONCE()

Posted Aug 10, 2012 15:35 UTC (Fri) by daglwn (subscriber, #65432) []

C compilers detect it all the time. It is a trivial analysis. Users tend to ignore the warnings, however.



ACCESS_ONCE()

Posted Aug 12, 2012 18:52 UTC (Sun) by Wol (guest, #4433) []

Which is why, when I gave a bunch of novice programmers instruction about how to maintain code, I said "the standard is (a) always turn warnings to max and (b) always fix or otherwise understand *every* warning".

We had a bunch of warnings we couldn't get rid of, hence it didn't say "fix all warnings", but "explain it away" is just as effective, if less satisfying.

Cheers,
Wol


ACCESS_ONCE()

Posted Aug 13, 2012 8:02 UTC (Mon) by jezuch (subscriber, #52988) []

> C compilers detect it all the time. It is a trivial analysis. Users tend to ignore the warnings, however.

It is. And they do. And every time I see someone fixing a "use of undefined variable" warning by initializing it to zero at declaration point, I cringe. I've seen it in stable updates to the kernel a lot...


ACCESS_ONCE()

Posted Aug 13, 2012 16:31 UTC (Mon) by daglwn (subscriber, #65432) []

Absolutely. One needs to first understand _why_ the warning is there before fixing it. Warnings don't replace understanding.



ACCESS_ONCE()

Posted Aug 13, 2012 9:38 UTC (Mon) by etienne (subscriber, #25256) []

It is not trivial, or even always possible, to detect whether a variable is only initialised and used when another variable is set to a special value.
Something like (very simplified):
extern unsigned loglevel;

void fct (void) {
int avar;
if (loglevel = 4) avar = 10;
increase_loglevel();
do_some_unrelated_stuff();
if (loglevel == 5) printf ("%d\n", avar);
}


ACCESS_ONCE()

Posted Aug 13, 2012 16:34 UTC (Mon) by daglwn (subscriber, #65432) []

It is not trivial to get an exact and accurate answer in the general case, true, but I was assuming we were talking about the common case of locally-declared variables.

Still, even in this case the compiler could warn about it even if it doesn't know for sure. The code certainly looks suspicious and a warning would be appropriate. False positives are just fine if they are limited in number. The compiler can provide directives to suppress them if the programmer knows it's not a problem.

Note that gcc does just this. It warns that variables *might* be uninitialized.



ACCESS_ONCE()

Posted Aug 13, 2012 16:50 UTC (Mon) by nybble41 (subscriber, #55106) []

Fortunately, in your example avar is always initialized to 10. :)

Assuming "loglevel = 4" was replaced with "loglevel == 4", I would expect the compiler to generate a warning in this case, since it can't _prove_ that avar was initialized before the printf() call. I would also hope that the compiler would take advantage of the fact that avar is either 10 or undefined to simply set it to 10 regardless of loglevel, for code size and performance reasons if nothing else.

In cases like this, IMHO, it would be better to use a common flag rather than testing loglevel multiple times:

void fct (void) {
int avar;
bool use_avar;
use_avar = (loglevel == 4);
if (use_avar) avar = 10;
increase_loglevel();
do_some_unrelated_stuff();
if (use_avar) printf ("%d\n", avar);
}

I'm not sure whether the compiler can follow this any better than before, so there may still be a warning, but at least the coupling is visible to anyone reading the code, and doesn't depend on the implementation of external functions.


ACCESS_ONCE()

Posted Aug 13, 2012 9:56 UTC (Mon) by dgm (subscriber, #49227) []

> How can the compiler possibly know what you meant when you wrote something outside the language spec?

The code we are arguing about, the one with ACCESS_ONCE(), is NOT outside the spec in any way, is it?


ACCESS_ONCE()

Posted Aug 13, 2012 12:11 UTC (Mon) by nye (guest, #51576) []

>> How can the compiler possibly know what you meant when you wrote something outside the language spec?

> The code we are arguing about, the one with ACCESS_ONCE(), is NOT outside the spec in any way, is it?

No it isn't. The point is that, absent the ACCESS_ONCE() macro, the assumption that the compiler shouldn't pull that access out of the loop is what's outside the language spec, because the spec says it can safely do so.


ACCESS_ONCE()

Posted Aug 17, 2012 10:57 UTC (Fri) by dgm (subscriber, #49227) []

> the spec says it can safely do so.

But does the spec say it _has_ to do so? Does it do more good or harm?

Not everything that is allowed is good. For example, the compiler is (in theory) allowed to do anything it wants when presented with code that raises undefined behavior. Anything. "rm -rf /" for instance would be 100% correct. Are GCC developers planning this "feature" for gcc 4.8? Of course not, that would be stupid when you could be playing nethack instead...


ACCESS_ONCE()

Posted Aug 13, 2012 16:35 UTC (Mon) by daglwn (subscriber, #65432) []

Yes, it is. It uses typeof().

But ignoring that, nye has the more useful response. :)



ACCESS_ONCE()

Posted Aug 17, 2012 10:37 UTC (Fri) by dgm (subscriber, #49227) []

typeof() is clearly something acceptable in gcc's C dialect. Or have they removed their own extension without the world noticing?


ACCESS_ONCE()

Posted Aug 20, 2012 10:07 UTC (Mon) by etienne (subscriber, #25256) []

The thing is: can the type returned by typeof() be modified by adding the "volatile" keyword? And how far can it be modified? Can the volatility be removed again, and can the signedness be changed by adding signed/unsigned?
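
A quick experiment suggests an answer to the first half of the question: qualifiers can be added around typeof() just as around any ordinary type name, which is exactly what ACCESS_ONCE() does. Plain typeof() offers no way to strip a qualifier off again (C23 later added typeof_unqual() for that), and signedness cannot be changed at all, since "unsigned typeof(x)" is not valid. A sketch:

```c
/* Adding a qualifier to a typeof()-derived type: the declaration
 * below turns an int * into a volatile int *, so the read is a
 * real volatile load. */
int read_via_volatile(int *p)
{
    volatile __typeof__(*p) *vp = p;
    return *vp;
}
```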


ACCESS_ONCE()

Posted Aug 7, 2012 19:42 UTC (Tue) by daglwn (subscriber, #65432) []

Oh, we have plenty of cases where we try to cater to users who stretch (to put it mildly) the standard. But one must recognize that the programmer is no longer programming in said language. He or she is programming in some other, ill-defined language.

My point is that a bug caused by the programmer not adhering to the language standards is not caused by the compiler. The compiler team may be willing to work around the problem but that doesn't mean the code ain't broke.

> If something has the potential of doing harm, and the user is perfectly
> capable of doing it by hand anyway if really needed, then it's best for
> the compiler not to do it.

The problem is that the compiler has *no way* to know if the code is potentially harmful because the code is doing something outside the definition of the language. The problem ACCESS_ONCE is trying to solve is a perfect example. The compiler doesn't know which parts of code will be operating in a multi-threaded environment, much less which pieces of memory are shared. We have "volatile" to mark variables in a way that happens to work for our current threading models. But it is the only thing we have in C right now.

Compiler writers generally don't want to limit legal transformations because those transformations can help the vast majority of programmers who don't have whatever specialized problem one programmer might complain about.




ACCESS_ONCE()

Posted Aug 8, 2012 14:30 UTC (Wed) by dgm (subscriber, #49227) []

> The problem is that the compiler has *no way* to know if the code is potentially harmful

It's not the code that is harmful, but the transformation the compiler does during optimization. To put it another way: the compiler chooses to do something that is NOT what the code says. It does so because it assumes it's safe. Because there's no way it can be sure whether the transformation is safe (or not!), it should not be doing it.

> The compiler doesn't know which parts of code will be operating in a multi-threaded environment

Isn't it common sense that any code compiled today _can_ be used in a multi-threaded environment?



ACCESS_ONCE()

Posted Aug 8, 2012 16:56 UTC (Wed) by daglwn (subscriber, #65432) []

> It's not the code that it's harmful, but the transformation the compiler
> does in the optimization.

No. The code is the specification of what the programmer wants to happen. If the programmer doesn't mark something "volatile" when it should be, there's no way the compiler can automatically infer that information.

> Because there's no way it can be sure if the transformation is safe (or
> not!), it should not be doing it.

You've just killed all compiler optimization.

> Isn't it common sense that any code compiled today _can_ be used in a
> multi-threaded environment?

And any variable might be shared. So now the compiler must assume everything is volatile.

You've just killed all compiler optimization.



ACCESS_ONCE()

Posted Aug 13, 2012 10:07 UTC (Mon) by dgm (subscriber, #49227) []

> And any variable might be shared.

Threads are supposed to share all memory, aren't they?

>You've just killed all compiler optimization.

Only the unsafe ones. There are plenty of opportunities for optimization, but I don't think this is an example of one.


ACCESS_ONCE()

Posted Aug 13, 2012 16:41 UTC (Mon) by daglwn (subscriber, #65432) []

> Threads are supposed to share all memory, aren't they?

It depends on the threading model and since the standard doesn't specify one, the compiler cannot know in the general case.

Now, gcc has for example the -pthreads switch that tells it something about what to expect so that can help. But the kernel build system almost certainly doesn't use that.

> There are plenty of opportunities for optimization, but I don't think
> this is an example of one.

When one restricts what is known about memory accesses, the compiler loses a *lot* of transformations. Much much more than one would think. Spill code gets a lot more inefficient, for example, which you wouldn't expect since spills are entirely local. But it happens because the compiler cannot do certain peephole optimizations that help reduce the amount of spill code.

The "big" transformations like loop restructuring, vectorization and the like are almost impossible if the behavior of memory can't be determined. These are no more "unsafe" than any other transformation. The compiler won't do them if it doesn't know it's safe according to the standard.



ACCESS_ONCE()

Posted Aug 13, 2012 16:58 UTC (Mon) by jwakely (subscriber, #60262) []

> Now, gcc has for example the -pthreads switch that tells it something about what to expect so that can help.

Except that for many platforms it's equivalent to '-D_REENTRANT -lpthread' and so only affects the preprocessor and linker, which doesn't tell the compiler anything.


ACCESS_ONCE()

Posted Aug 4, 2012 12:49 UTC (Sat) by butlerm (subscriber, #13312) []

>It seems to me the bug is in the kernel. Either lock or lock->owner should be volatile-qualified in its declaration.

The problem with that idea is that practically everything in the kernel would have to be declared volatile. That is inefficient when it doesn't matter (because a lock has been acquired), and misleading otherwise. Linus made some comments along these lines:


ACCESS_ONCE()

Posted Aug 2, 2012 17:08 UTC (Thu) by PaulMcKenney (subscriber, #9624) []

I believe that the features in C11 will prove quite helpful, though opinions do vary rather widely (disclaimer: I was involved in the C11 process). But please keep in mind that there has been concurrent code written in C for some decades: Sequent started in 1983, which was almost three decades ago, and Sequent was probably not the first to write parallel code in C. Linux added SMP support in the mid-1990s, which is well over a decade ago.

So what did us parallel programmers do during the decades before C11? Initially, we relied on the fact that early C compilers did not do aggressive optimizations, though register reloads did sometimes trip us up. More recently, we have relied on non-standard extensions (e.g., the barrier() macro) and volatile casts. Those of us living through that time (and I am a relative newcomer, starting shared-memory parallel programming only in 1990) have been shifting our coding practices as the compiler optimizations become more aggressive.

And I am very sorry to report that I really have seen compiler writers joyfully discuss how new optimizations will break existing code. In their defense, yes, the code that they were anticipating breaking was relying on undefined behavior. Then again, until production-quality C11 compilers become widely available, all multithreaded C programs that allow concurrent read-write access to any given variable will continue to rely on undefined behavior.


ACCESS_ONCE()

Posted Aug 2, 2012 19:20 UTC (Thu) by tvld (subscriber, #59052) []

Yes we've relied on assumptions about compiler implementations. And that sure works, but it isn't ideal, and people should be aware of that. With memory models (that specify multi-threaded executions) we can at least reduce that to assuming that the compiler implements the model correctly. Compiler and application writers need to have a common understanding of the model, but at least we have formalizations of the model.
> Those of us living through that time (and I am a relative newcomer, starting shared-memory parallel programming only in 1990) have been shifting our coding practices as the compiler optimizations become more aggressive.

Considering the example in the article, hoisting a load out of a loop is not something that I'd call an aggressive optimization.
> And I am very sorry to report that I really have seen compiler writers joyfully discuss how new optimizations will break existing code. In their defense, yes, the code that they were anticipating breaking was relying on undefined behavior.

And I wouldn't have been joyful about that either. But repeating stereotypes and anecdotal "evidence" about this or that group of people doesn't help us at all. IMHO, the point here shouldn't be kernel vs. compilers or such, but pointing out the trade-offs between compiler optimizations of single-threaded pieces of code vs. synchronizing code, why we have to make this distinction, how we can draw the line, what memory models do or can't do, etc. Shouldn't we be rather discussing how we can get compilers to more quickly reach production-quality support of memory models, and how to ensure and test that? Or starting from the C/C++ memory model, whether there are limitations of it that are bad for the kernel, and whether compilers could offer variations that would be better (e.g., with additional memory orders)?


ACCESS_ONCE()

Posted Aug 2, 2012 21:29 UTC (Thu) by PaulMcKenney (subscriber, #9624)

Yep, having the compiler understand concurrency is a good thing, no argument from me. If you wish to argue that point, you will need to argue it with someone else. ;-)

And indeed, there are any number of optimizations that are profoundly unaggressive by 2012 standards. However, the fewer registers the target system has (and yes, I have used single-register machines), the less memory the compiler has available to it (I have used systems with 4Kx12bits of core, and far smaller systems were available), and the smaller the ratio of memory latency to CPU clock period (1-to-1 on many systems 30 years ago), the less likely the compiler will do certain optimizations. All of these system attributes have changed dramatically over the past few decades, which in turn has dramatically changed the types of optimizations that compilers commonly carry out.

I, too, favor looking for solutions. But few people are going to look for a solution until they believe that there is a problem. The fact that you already realize that there is a problem is a credit to you, but it is unwise to assume that others understand (or even care about) that problem. And then there are, of course, the likely disagreements over the exact nature of the problem, to say nothing of disagreements over the set of permissible solutions.


ACCESS_ONCE()

Posted Aug 3, 2012 11:29 UTC (Fri) by tvld (subscriber, #59052)

I agree that the underlying problem might not be easy to see. But exactly this is a reason not to wrap it in jokes and other things that can distract. Also, in this case, the funny bit doesn't even hint at the problem, so it won't help to explain it.

And I've seen people take such (supposedly) funny comments literally (compilers are evil! they don't read my mind!) too often to really like such comments. And yes, I realize that this is just anecdotal evidence :)


ACCESS_ONCE()

Posted Aug 3, 2012 16:53 UTC (Fri) by PaulMcKenney (subscriber, #9624)

I cannot say that I found your concerns convincing, and your further commentary isn't helping to convince me.

So rather than continuing that sterile debate, let me ask a question about the examples in the article. In C11/C++11, would it be sufficient to make the ->owner field atomic with memory_order_relaxed accesses, or would the volatile cast still be necessary to prevent the compiler from doing those optimizations?


ACCESS_ONCE()

Posted Aug 3, 2012 20:59 UTC (Fri) by tvld (subscriber, #59052)

That's a question about forward progress, which isn't specified in much detail, at least in C++11.

On the one hand, mo_relaxed loads can read from any write in the visible sequence of side effects. So this would allow the compiler to hoist the load. The same holds for mo_acquire loads, actually, I believe.

On the other hand, there's C++11 1.10.2 and 1.10.25. You could interpret those as saying that the atomics in the abstract machine would eventually load the most recent value (in modification order). Assuming that the standard's intent is that this is an additional constraint on reads-from, then compilers wouldn't be allowed to hoist the load out of loops. I didn't see an equivalent of 1.10.2 / 1.10.25 in C11. (You said you've been involved in C11; is there any, or if not, why not?)

Either way, I bet you've thought this through before. So what's your detailed answer to your question? What's your suggestion for how to specify progress guarantees in more detail in the standards?


ACCESS_ONCE()

Posted Aug 3, 2012 22:13 UTC (Fri) by PaulMcKenney (subscriber, #9624)

C++11's 1.10.24 says that any thread may be assumed to eventually either terminate, invoke a library I/O function, access or modify a volatile object, or perform a synchronization or atomic operation. This was a late addition, and it replaced some less-well-defined language talking about loop termination. I could read this as saying that the compiler is not allowed to hoist atomic operations out of infinite loops; in other words, load combining is allowed, but the implementation is only allowed to combine a finite number of atomic loads. How would you interpret it?

I believe that this wording will be going into C11 as well, but will find out in October. Not so sure about 1.10.2 -- C gets to support a wider variety of environments than C++, so is sometimes less able to make guarantees.

How about the second issue raised in the original article? If an atomic variable is loaded into a temporary variable using a memory_order_relaxed load, is the compiler allowed to silently re-load from that same atomic variable?

My approach would be to continue using things like ACCESS_ONCE() until such time as all the compiler people I know of told me that it was not necessary, and with consistent rationales for why it was not necessary. By the way, this is one of the reasons I resisted the recent attempt to get rid of the "volatile" specifier for atomics.
