
What Every C Programmer Should Know About Undefined Behavior #3/3

The final segment of the LLVM blog's series on undefined behavior is up. "In this article, we look at the challenges that compilers face in providing warnings about these gotchas, and talk about some of the features and tools that LLVM and Clang provide to help get the performance wins while taking away some of the surprise."


What Every C Programmer Should Know About Undefined Behavior #3/3

Posted May 22, 2011 0:29 UTC (Sun) by jd (guest, #26381) [Link] (4 responses)

Unsafe optimizations ought to be a solvable problem. Once the executable code has been generated, it should be possible to say, in a well-defined subset of cases, whether the resultant binary is trying to perform a nonsense operation.

If the code has symbols that state what line(s) of source are involved and what optimization(s) were used, you could always give a really verbose warning to say that this specific combination is potentially invalid.

Or, for cleverer results, make the compilation multi-pass. Where a block of intermediate source produces bad results, disable the most likely offending optimizations and try again until either there's nothing left or the tests say that the compilation looks ok.

The disadvantage of the former is that the warnings would be horribly verbose and drown developers in information. The disadvantage of the latter is that it's slow and of indeterminate duration, for relatively marginal gains in what you hope are fringe cases (where the original code is good but the compiler's heuristics screw up).

In general, fixing the developers is easier than fixing the freakier cases.

What Every C Programmer Should Know About Undefined Behavior #3/3

Posted May 22, 2011 1:52 UTC (Sun) by welinder (guest, #4699) [Link] (3 responses)

> Once the executable code has been generated, it should be possible to
> say in a well-defined subset of cases whether the resultant binary is
> trying to perform a nonsense operation.

That would be a solution to the halting problem. So, no -- not possible.

What Every C Programmer Should Know About Undefined Behavior #3/3

Posted May 22, 2011 8:36 UTC (Sun) by nteon (subscriber, #53899) [Link]

well, patches welcome of course :)

What Every C Programmer Should Know About Undefined Behavior #3/3

Posted May 23, 2011 15:19 UTC (Mon) by HelloWorld (guest, #56129) [Link]

> That would be a solution to the halting problem.
How so?

What Every C Programmer Should Know About Undefined Behavior #3/3

Posted May 23, 2011 15:53 UTC (Mon) by vonbrand (guest, #4458) [Link]

It is possible to find a set of conditions in which nothing strange happens (by simulating the program running in each case, or some sophisticated variation thereof). The problem is that doing so is a lot of work, and/or the resulting set of "safe" uses is disappointingly small.

What Every C Programmer Should Know About Undefined Behavior #3/3

Posted May 22, 2011 8:57 UTC (Sun) by oak (guest, #2786) [Link] (2 responses)

After noting the comment "the C99 standard lists 191 different kinds of undefined behavior" in http://blog.regehr.org/archives/213

I took a look at those undefined behaviors in the C99 standard specification.

One of the undefined behaviors is: "For a call to a function without a function prototype in scope, the number of arguments does not match the number of parameters".

GCC interprets "name()" as a K&R-style function declaration. So, isn't "bar()" in this code:
------ code1.h --------
extern int foo();
------ code1.c --------
#include "code1.h"
int foo() {
return 1;
}
------ code2.c --------
#include "code1.h"
int bar() {
return foo(2);
}
-----------------------
undefined behavior according to the C99 standard, and therefore *the whole program* invalid?

Why doesn't GCC give even a warning about this when one uses "-Wall -Wextra -pedantic --std=c99"? [1]

[1] http://gcc.gnu.org/bugzilla/show_bug.cgi?id=48091

The correct ANSI-C prototype for this would be:
------ code1.h --------
extern int foo(void);
-----------------------

But people who also code C++ often think that an empty parameter list means void in C like it does in C++. However, GCC (unlike LLVM, MSVC etc.) assumes such a declaration is a K&R function rather than a prototype and disables all argument checking for that function, even when one specifically requests pedantic C99 standard checks.

Typically giving extra arguments isn't a problem, but the standard says that without a prototype that's undefined behavior, and therefore the compiler is within its rights to generate "nasal demons" in that situation...?

What Every C Programmer Should Know About Undefined Behavior #3/3

Posted May 22, 2011 19:46 UTC (Sun) by vonbrand (guest, #4458) [Link]

Functions with and without parameters can very well be implemented differently, so this could bite you hard one day.

Yup...

Posted May 23, 2011 5:09 UTC (Mon) by khim (subscriber, #9252) [Link]

Typically giving extra arguments isn't a problem, but the standard says that without a prototype that's undefined behavior, and therefore the compiler is within its rights to generate "nasal demons" in that situation...?

Yup. That's correct. If your platform uses any form of callee clean-up by default, then your program will indeed blow up (typically it'll generate SIGSEGV, but that's not guaranteed).

The program is still valid if bar() is never called: undefined behavior only makes a program invalid when it's actually executed.

This is true for other cases as well. The compiler starts with the set of undefined behaviors and a promise from the programmer: this is a C (or C++) program, and that means it does not trigger undefined behavior. How exactly the programmer avoids undefined behavior is not important: perhaps there is some math behind it, or even command-line options. The important fact is that the program does avoid the undefined behaviors, so all code paths which trigger them can be safely reduced or removed: they can never be executed, so it's safe. Good compilers have a few passes dedicated to propagating undefined behavior "back" (if access to some variable always triggers undefined behavior, then it is never accessed, so there is no need to even calculate it, which means some function calls can often be removed, etc).
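
As a rough illustration of that "backwards" propagation (a hypothetical snippet, not taken from any particular compiler or bug report): a dereference promises the compiler that the pointer is non-null, so a later null check becomes dead code.

int read_field(int *p)
{
        int value = *p;  /* if p were NULL this would be undefined behavior,
                            so the compiler may assume p != NULL afterwards */
        if (p == 0)      /* ...which lets it drop this check and the branch */
                return -1;
        return value;
}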

Real world experience

Posted May 22, 2011 10:19 UTC (Sun) by tialaramex (subscriber, #21167) [Link] (1 responses)

On Thursday I was confronted with a pile of HTML diagnostic output from Clang (specifically checker-256 according to the metadata) for some code we GPL'd a while back.

Disappointingly so far as I can tell the output fits roughly into two categories:

1. I knew that and I don't care. e.g. Telling me that there are some trivial dead stores in code that we wrote to be transparent rather than fast. Not Clang's fault it can't read my mind, but it's still useless.

2. Confusing to the point of probably wrong. e.g. we have some code which does roughly:

for (int k = 0; k < length; ++k) { /* initialise */ }
for (int k = 0; k < length; ++k) { /* do something */ }
for (int k = 0; k < length; ++k) { /* wrap up */ }

Clang's analysis appears to claim that if the first loop exits immediately, but the second loop gets run, we access uninitialised bytes. Well, OK. But, how can one of these identical loops with constant length run, and not the others? Clang apparently thinks this part of the analysis is too obvious to bore a human with it. So we have no idea; all we know is Valgrind never sees these imaginary garbage reads in running code, which tends to make us suspect an analysis bug. It would be nice if the diagnostic was clear enough to say for sure.

Really the only semi-useful output was some code which Clang noticed is confused about whether (p == NULL) is possible, testing for it in one place and then not testing elsewhere. We fixed that, but that's one useful report compared to dozens that were a waste of time to read.

I recall reading a previous LWN article in which it was claimed that THIS is what makes static analysis hard. Figuring out what you should report in order that reading your diagnostics is a good use of the programmer's time. Right now Clang's offering isn't there. Sorry.

Real world experience

Posted May 22, 2011 18:43 UTC (Sun) by cbcbcb (subscriber, #10350) [Link]

for (int k = 0; k < length; ++k) { /* initialise */ }
for (int k = 0; k < length; ++k) { /* do something */ }
for (int k = 0; k < length; ++k) { /* wrap up */ }

Clang's analysis appears to claim that if the first loop exits immediately, but the second loop gets run, we access uninitialised bytes. Well, OK. But, how can one of these identical loops with constant length run, and not the others?
If an operation in /* initialise */ performs a store which could alias with length then (as far as the analysis is concerned) the 2nd loop may run more iterations than the first.

What Every C Programmer Should Know About Undefined Behavior #3/3

Posted May 22, 2011 21:46 UTC (Sun) by alvieboy (guest, #51617) [Link] (62 responses)

I know I might look a bit naïve by saying this, but:

shouldn't we, "C" and "C++" programmers, be aware of this? I mean, delegating some optimizations to our compilers is a good thing, but we definitely *must* be aware of our mistakes [and knowing our mistakes is the most important part]. There's no point in having a very "smart" compiler when the programmer is slightly "dumb".

I am never sure how my compilers behave; in many situations I do inspect the generated low-level assembly code, and so far I don't recall my compilers issuing bad code unless I wrote bad code myself. They do not generate optimal code, that is for sure, but if I am explicit about what I want, they always follow my intent.

Undefined behaviour is not a compiler issue, it's a programmer issue. AFAIK some programming languages, like Ada [I have no experience in Ada whatsoever] and VHDL [I have a lot of experience in VHDL], allow you to place explicit compile-time (and run-time) constraints on your data types. Of course, if you don't have these (like in "C", though you surely can have them in "C++") you need to be extra careful.

But again, this boils down to what the programmer writes down. The programmer rules. The programmer writes bugs. The programmer must think three times before writing code (even in Ada and VHDL).

Think before you type. Think again after you type - and remember you're not perfect. Nor is your compiler. But you are definitely more important. And definitely smarter.

Alvie

What Every C Programmer Should Know About Undefined Behavior #3/3

Posted May 23, 2011 0:57 UTC (Mon) by elanthis (guest, #6227) [Link] (6 responses)

That is the point of these articles. Raising awareness. Hell, I've worked on compilers before and learned a few things from these articles. C/C++ are not trivial languages to fully understand (C++ especially so, at almost 1200 pages for the upcoming C++0x specification).

What Every C Programmer Should Know About Undefined Behavior #3/3

Posted May 23, 2011 2:38 UTC (Mon) by dgm (subscriber, #49227) [Link] (5 responses)

One of the problems is that, at 1200 pages of concise specification, the language can be said to be too complicated. Nobody except a compiler expert can be expected to master such a complex beast. The rest of us mortals will know and use a few percent of all that, but the language will be full of sharp edges and pointy corners, waiting until you find them by accident, ready to burn a week's worth of your time chasing some weird behavior.

Maybe it's time to try to reduce and simplify C++ instead of adding more and more features. Wouldn't it be great if a concise and regular dialect of C++ could be specified in 120 pages, instead of 1200?

What Every C Programmer Should Know About Undefined Behavior #3/3

Posted May 23, 2011 3:25 UTC (Mon) by neilbrown (subscriber, #359) [Link] (2 responses)

> Maybe it's time to try to reduce and simplify C++ instead of adding more and more features.

At the same time that Sun was working on Oak (which became Java), there was a team working on "Clarity", which was meant to be a simpler C++... I wonder what became of that (i.e. I cannot be bothered hunting for an answer, I want someone to tell me :-)

What Every C Programmer Should Know About Undefined Behavior #3/3

Posted May 24, 2011 17:23 UTC (Tue) by cmccabe (guest, #60281) [Link] (1 responses)

It would be interesting if C++ had a "use strict" mode similar to Perl. Something that would try to eliminate the worst abuses. The problem is that there really is a lot less consensus on what "the worst abuses" are than in Perl. I have my own list, but I'm sure that posting it here would generate a flamewar (and probably lead to me educating a lot of people about obscure C++ trivia).

I know that you realize this (and clearly all the kernel developers do too), but a lot of developers don't realize that you can use C exactly like a "simpler C++." Instead of classes, you have functions which all take a pointer to a struct. Instead of private methods, you have functions which are static to a file. Instead of virtual functions, you have tables of function pointers.
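
A minimal sketch of that style, with made-up names purely for illustration:

#include <stdio.h>

/* "Class": a plain struct. */
struct counter {
    int value;
};

/* "Private methods": functions static to the file. */
static void counter_init(struct counter *c) { c->value = 0; }
static void counter_add(struct counter *c, int n) { c->value += n; }

/* "Virtual functions": a table of function pointers. */
struct counter_ops {
    void (*init)(struct counter *);
    void (*add)(struct counter *, int);
};

static const struct counter_ops default_ops = { counter_init, counter_add };

int main(void)
{
    struct counter c;
    default_ops.init(&c);
    default_ops.add(&c, 42);
    printf("%d\n", c.value);  /* prints 42 */
    return 0;
}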

I would also encourage anyone who is tired of C++ to check out Google Go. At least give it a try.

What Every C Programmer Should Know About Undefined Behavior #3/3

Posted Jun 5, 2011 11:07 UTC (Sun) by JanC_ (guest, #34940) [Link]

I'm sure somebody can write a 2400 page book to define a safe subset of C++ ;-)

What Every C Programmer Should Know About Undefined Behavior #3/3

Posted May 23, 2011 7:27 UTC (Mon) by zorro (subscriber, #45643) [Link]

You have to ask yourself how many of those 1200 pages describe the core language. In the current standard, about 300 out of 750 pages is language specification, and even those contain plenty of examples and notes.

What Every C Programmer Should Know About Undefined Behavior #3/3

Posted May 23, 2011 9:51 UTC (Mon) by marcH (subscriber, #57642) [Link]

> Maybe it's time to try to reduce and simplify C++ instead of adding more and more features.

It never works that way. Backward compatibility is just too valuable, even more valuable than the burned weeks you just mentioned.

What happens is that new, safer languages gradually take the place of C/C++ everywhere performance is not as critical. And this is a Good Thing. C++ has been far too successful, way beyond the space where it's the best choice.

You're tired of the kids repeatedly crashing the Formula 1 when going to the supermarket? Just buy them a minivan. You will have more time to focus on the next race.

What Every C Programmer Should Know About Undefined Behavior #3/3

Posted May 23, 2011 6:54 UTC (Mon) by iabervon (subscriber, #722) [Link] (54 responses)

The usual problem is that programmers assume that, while something is undefined, it is within some limited range. For example, people assume that dereferencing a pointer leads to either a trap or some arbitrary value; they are surprised if it leads to some later well-defined code not behaving as defined.

For that matter, the OpenSSL developers assume that uninitialized variables contain unspecified values of their types (rather than accessing them causing undefined behavior). If a compiler were to use a special "undef" value such that any operation that used it produced "undef", and constant-propagated this through the random number generator, it would find that the random number generator always produces "undef", and could strip it out entirely.
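
A tiny sketch of the pattern being described here (hypothetical code, not OpenSSL's actual source):

unsigned int mix_entropy(unsigned int pool)
{
    unsigned int junk;   /* deliberately left uninitialized as "extra entropy" */
    return pool ^ junk;  /* per the argument above, this read is undefined
                            behavior rather than "some unspecified unsigned
                            value": a compiler may treat the result as "undef"
                            and constant-propagate that through the callers */
}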

There's a lot of cases where programmers imagine what undefined behavior a compiler could find useful to pick for their code, and the programmers are fine with any choice they can imagine. But their imaginations are not nearly good enough.

What Every C Programmer Should Know About Undefined Behavior #3/3

Posted May 23, 2011 10:32 UTC (Mon) by cesarb (subscriber, #6266) [Link] (2 responses)

> [...] developers assume that uninitialized variables contain unspecified values of their types (rather than accessing them causing undefined behavior). If a compiler were to use a special "undef" value [...]

There is at least one processor architecture where accessing uninitialized variables can crash your program. It is not the compiler in this case, it is the processor itself which has a special "undefined" value.

See this article: "Uninitialized garbage on ia64 can be deadly" on Raymond Chen's blog (http://blogs.msdn.com/b/oldnewthing/archive/2004/01/19/60...).

What Every C Programmer Should Know About Undefined Behavior #3/3

Posted May 23, 2011 19:45 UTC (Mon) by kripkenstein (guest, #43281) [Link]

> There is at least one processor architecture [ia64] where accessing uninitialized variables can crash your program. It is not the compiler in this case, it is the processor itself which has a special "undefined" value.

Somewhat similar: if you consider JavaScript an 'architecture', then when compiling C or C++ to JS (using Emscripten), reading uninitialized values gives you a JS 'undefined' value, which isn't a number (since you are doing something like x = [1,2,3]; y = x[100], so y will be undefined).

This doesn't crash, but it can lead to very unpredictable behavior. For example, after reading such a value into x, x+0 != x (since undefined+0 is NaN, and NaN != undefined).

What Every C Programmer Should Know About Undefined Behavior #3/3

Posted May 25, 2011 14:29 UTC (Wed) by welinder (guest, #4699) [Link]

The old Alpha architecture can do that too, at least for floating-point data.

What Every C Programmer Should Know About Undefined Behavior #3/3

Posted May 23, 2011 11:22 UTC (Mon) by farnz (subscriber, #17727) [Link] (50 responses)

It's not just a lack of imagination. Developers have a mental model of how a processor works, and how the C language maps to that model. This mental model is often badly outdated; it assumes things like all memory accesses are as cheap as each other, instructions are executed in order, the time cost of each machine instruction is fixed, and other things that just aren't true on modern systems. Compilers, meanwhile, know about tricks like speculative loads, out-of-order execution, cache prefetching, branch prediction, and all sorts of things that just aren't in a typical developer's mental model of how things work.

Basically, if your model of how CPUs work stalls at the 8086 (not uncommon - despite knowing how modern CPUs work, I'm still prone to thinking in 8086-style terms), you're going to struggle to think of the possible ways a modern CPU could execute the code as given. Throw in compiler cleverness as discussed in these articles, and you're drowning - too many of the optimisations in the compiler and in the hardware actively break your mental model of what happens when your code is run.

It's different viewpoints...

Posted May 24, 2011 7:00 UTC (Tue) by khim (subscriber, #9252) [Link] (49 responses)

Actually it's much simpler. Programmers usually assume that "undefined behavior" is some kind of "unknown behavior" and try to guess what the worst case could be. Then they decide whether they care or not.

Compilers, on the other hand, assume they are working with a C and/or C++ program. And a proper C and/or C++ program never triggers undefined behavior. This means that every time undefined behavior is detected by the compiler, it can be assumed that this part is never actually executed with arguments which can trigger undefined behavior - and this makes it possible to build a lot of interesting theories which help optimization.

Note that the ultimate conversion of all SSL functions to "return undef;" is a valid optimization, and it does not depend on the CPU model at all.

It's different viewpoints...

Posted May 24, 2011 8:05 UTC (Tue) by farnz (subscriber, #17727) [Link] (26 responses)

But why do developers not realise that this is a problem? They're aware that triggering real machine undefined behaviour is a bad idea (e.g. poking random memory through a pointer set by casting the result of rand()).

I think the model of C as a "high level assembler" is part of what triggers this misunderstanding in developers; if you think that the C compiler does some trivial optimisations (constant folding and the like), then spits out a program that works on the real hardware in roughly the same way that it would work if you'd hand-assembled the C code yourself, you don't stop and think "hang on a minute, this might be undefined behaviour - the compiler could do anything".

Add to that the generally high quality of implementation of real compilers, such that undefined behaviour rarely bites you in the backside, and it can take a developer a long time to discover that their mental model of a C compiler (a thing that takes their input C code and spits out machine code that does exactly what they would have done if they'd written the code in assembler for a machine they understand) is wrong.

It's different viewpoints...

Posted May 24, 2011 8:45 UTC (Tue) by etienne (guest, #25256) [Link] (25 responses)

Well, if I wanted to write a test tool for a library, I might get the idea of calling its functions with rand() results or null pointers to see what happens...
It does bother me that the compiler would produce the message "your library is working perfectly" when in fact a wrong pointer may crash it...
Same for debugging: I may want to display some binary values as unsigned in hexadecimal even if I know that it is a float (so, a dirty cast), to see if I had a memory overflow and this float in fact contains a string...

Sure, but then you deserve what you get...

Posted May 24, 2011 10:05 UTC (Tue) by khim (subscriber, #9252) [Link] (20 responses)

Same for debugging: I may want to display some binary values as unsigned in hexadecimal even if I know that it is a float (so, a dirty cast), to see if I had a memory overflow and this float in fact contains a string...

Right, but how often do you actually do that? Do you really want to punish the compiler and force it to keep the for-loop iteration variable in memory, just to make sure it's changed via a float if the pointers are mixed just right?

My favorite example is bug 33498. It's this piece of code:

void table_init(int *value)
{
        int i;
        int val = 0x03020100;

        for (i = 0; i < 256/4; i++) {
                value[i] = val;
                val += 0x04040404;
        }
}

What does this piece of code do? Most people answer: it fills the table. Smart people answer: it efficiently fills a table of chars using a trick with type conversions. At that point I show what it really does: it destroys your program. Oops? Nasty bug in the compiler? Nope: Status: RESOLVED INVALID.

The problem with undefined behavior today is not that compilers exploit it for optimizations. It's the fact that they do it so carefully. That means a lot of cases which should trigger undefined behavior don't blow up but silently work - and people become complacent. Hopefully LTO will help: it'll allow the compiler to weed out the undefined-behavior branches more aggressively, which means people will be punished earlier and learn to look for undefined behavior cases.

Sure, but then you deserve what you get...

Posted May 24, 2011 11:20 UTC (Tue) by farnz (subscriber, #17727) [Link] (16 responses)

But look at comment 10 to that bug; Eric's not a fool, and yet his mental model of the C89 virtual machine tells him that the following code has implementation-defined semantics, not undefined semantics. In particular, he assumes that i will overflow in a defined fashion, although the final value is not predictable:

int i = INT_MAX;
int j;
int *location = some_sane_value;
for( j = 0; j < 100; ++j )
{
    location[j] = i++;
}

So, the question is why does Eric think that way? I would suggest that one reason is that an assembly language equivalent does have well-defined semantics (using an abstract machine that's a bit like ARM):

MOV R0, #INT_MAX
ADR R1, some_sane_value;
MOV R2, 0
.loop:
MOV [R1 + R2 * 4], R0
ADD R0, R0, #1
ADD R2, R2, #1
CMP R2, #100
BLT .loop

In this version of the code, which is roughly what an intuition of "C is a high level assembly language" would compile the source to, ADD R0, R0, #1 has defined overflow semantics; further, exiting the loop depends on the final value of R2, not on the value of R0. The surprise for Eric is twofold:

  1. The compiler has chosen to elide j, and exit the loop when i reaches its final value.
  2. Because i is signed, i's final value is undefined, thus the compiler never exits the loop.

If i was unsigned, and we changed INT_MAX to UINT_MAX, Eric would probably still have been surprised that his loop compiled to something like:

MOV R0, #UINT_MAX
ADR R1, some_sane_value
.loop:
MOV [R1], R0
ADD R0, R0, #1
ADD R1, R1, #4
CMP R0, #UINT_MAX + 100
BNE .loop

Assuming I'm right in thinking that it's the mental model caused by "C is a high level assembly language" that's breaking things, we have an open question: how do we change the way developers think about C such that perfectly correct compilers don't surprise them?

Sure, but then you deserve what you get...

Posted May 24, 2011 15:16 UTC (Tue) by foom (subscriber, #14868) [Link] (15 responses)

I think it's pretty ridiculous that signed overflow is undefined but unsigned overflow is defined. That's a low-level accident of history (some architectures used 2's complement for signed values, some used 1's complement). That this mistake now gets used as a loophole to make programs be optimized better (or optimized into running incorrectly) is a nice trick, but it doesn't really seem justifiable if you were going to do it again.

Signed overflow *ought to have been* implementation defined in C from the beginning, not undefined.

Sure, but then you deserve what you get...

Posted May 24, 2011 16:08 UTC (Tue) by marcH (subscriber, #57642) [Link] (13 responses)

> That this mistake now gets used as a loophole to make programs be optimized better (or optimized into running incorrectly) is a nice trick,

I do not think this is a fair description of what gcc does here.

Please correct me if I'm wrong: I think what happens here is just that gcc has a small and *unexpected* "overflow accident" while optimizing the end condition of the loop. And that is OK because signed overflows are simply not supported in C.

Of course it would be nice to have a warning; I guess the main article already explained why this is difficult.

Sure, but then you deserve what you get...

Posted May 24, 2011 16:48 UTC (Tue) by farnz (subscriber, #17727) [Link] (12 responses)

Reading the bug report, GCC specifically takes advantage of the undefinedness of signed overflow. We start with the following code (all code in a C-like pseudocode):

int *value;
int i;
int val = 0x03020100;

for (i = 0; i < 256/4; i++) {
    value[i] = val;
    val += 0x04040404;
}

Step 1 of the failure determines that the only time val equals 0x04030200 is when the loop exit condition is true. It thus rewrites the program to look as if the programmer had written:

int *value;
int i;
int val = 0x03020100;

for (i = 0; val != 0x04030200; i++) {
    value[i] = val;
    val += 0x04040404;
}

Next, GCC detects that in the absence of overflow, (val != 0x04030200) is always true. It thus rewrites the program to look as if the programmer had written:

int *value;
int i;
int val = 0x03020100;

for (i = 0; true; i++) {
    value[i] = val;
    val += 0x04040404;
}

This code then translates using the naïve interpretation to the assembler output in the bug report. Note that a finite loop has become infinite; this is a useful optimization because it's not uncommon for real world code to have conditionals that depend solely on constants, such that for this build, the conditional is always true or always false.

Sure, but then you deserve what you get...

Posted May 24, 2011 22:18 UTC (Tue) by marcH (subscriber, #57642) [Link] (11 responses)

I understand that there is a bad looking inconsistency between:
- optimization step 1 computes the result of a signed overflow
- optimization step 2 assumes there is never any signed overflow

I did not like the "loophole" sentence because I (mis?)understood it as: "gcc is punishing everyone who has not read the standard, on purpose".
I mean: the inconsistency between step 1 and step 2 is not evil! It is just an accident that happens to be allowed by the standard and that has benefits in other cases, when step 1 and step 2 do not collide this badly.

Sure, but then you deserve what you get...

Posted May 24, 2011 23:14 UTC (Tue) by iabervon (subscriber, #722) [Link] (5 responses)

Actually, I suspect that gcc actually transforms the condition to "val < 0x104030200". How would gcc, a C program, compute the result of a signed overflow without risking crashing? It's more comprehensible as "step 1 assumes that, because there is no signed overflow, the arbitrary-precision value of the expression 0x03020100 + (i*0x04040404) is the value of val; step 2 notices that the condition is always true due to the limited range of the variable."

This also avoids the issue that it would be really hard to determine that "val != 0x04030200" is always true without determining that the code actually hits a signed overflow.

Actually, the first transformed code is probably:

int *value;
int val;

for (val = 0x03020100; val < 0x104030200; val += 0x04040404, value++)
    *value = val;

Eliminating "i" is reasonably likely to be huge on x86 come register allocation, so it's a good optimization if it works. And gcc can assume it works because the programmer can't let signed arithmetic overflow. Of course, at this point, it already doesn't work as expected; the second optimization just makes the assembly confusing. The second optimization is really for "if (size >= 0x100000000) return -EINVAL;" where the programmer cares about a 32-bit limit in code that could be built with either 32-bit or 64-bit ints; in some builds it's important, and in all builds it's correct, but the compiler can eliminate it in cases where it doesn't matter.

Have you seen this page?

Posted May 25, 2011 5:12 UTC (Wed) by khim (subscriber, #9252) [Link] (4 responses)

How would gcc, a C program, compute the result of a signed overflow without risking crashing?

Have you seen this page? Specifically the libraries part? Do you know why GMP, MPFR and MPC are requirements, not options? Actually I think this particular optimization does not use multiprecision arithmetic, but when it's needed in some passes it is available to GCC, despite the fact that it's a C program.

P.S. Note that if you really want "overflow", not "undefined behavior", you can do that, and the fact that GCC is a C program does not stop you.

Have you seen this page?

Posted May 25, 2011 5:44 UTC (Wed) by iabervon (subscriber, #722) [Link] (3 responses)

Those libraries compute the well-defined results of calculations with large numbers; they don't give the result of signed overflow. They aren't going to help for figuring out the results of running (unoptimized, on the target machine):

if (MAXINT + 1 == -MAXINT) printf("Wow, 1's complement!\n");

because that's a matter of processor architecture, not mathematics. And certainly gcc isn't going to try eliciting the undefined behavior itself and replicating it, because the undefined behavior might be "your compiler crashes processing unreachable code", which the C specification doesn't allow.

Have you seen this page?

Posted May 26, 2011 8:27 UTC (Thu) by marcH (subscriber, #57642) [Link] (2 responses)

> Those libraries compute the well-defined results of calculations with large numbers; they don't give the result of signed overflow.

Of course they can do this; the latter is just one modulus operation away from the former.

> because that's a matter of processor architecture, not mathematics.

Surprise: processors architectures are rooted in arithmetics. All of them, even though they differ with each other.

Have you seen this page?

Posted May 27, 2011 9:45 UTC (Fri) by dgm (subscriber, #49227) [Link] (1 responses)

> Surprise: processors architectures are rooted in arithmetics. All of them, even though they differ with each other.

Still not a matter of mathematics: both 1's and 2's complement are equally correct. The matter is about the choice made by the processor designers, and thus it is a matter of processor architecture.

Have you seen this page?

Posted May 27, 2011 14:03 UTC (Fri) by marcH (subscriber, #57642) [Link]

It's a matter of both: 1. Choose the architecture, 2. Apply the maths specific to this architecture.

GMP or else can help for the latter (of course not for the former).

Sure, but then you deserve what you get...

Posted May 25, 2011 1:27 UTC (Wed) by foom (subscriber, #14868) [Link] (4 responses)

Yes, it has optimization benefits. That doesn't mean it's actually a good feature.

Overloading "signedness" with "cannot overflow" makes no sense, it's simply an accident of history.
But back in history, C compilers weren't smart enough to take advantage of the leeway given: they in fact did exhibit implementation-defined behavior, not undefined behavior, in the face of a signed overflow. They acted like the hardware acts upon signed overflow. It's only fairly recently that optimizers have taken advantage of this loophole in the standard that allows them to blow up your program if you have any signed int overflows.

Of course, assuming that an unsigned int cannot overflow would also have optimization benefits! Does it really make sense that a loop gets slower just because you declare the loop counter as an "unsigned int" instead of an "int"?

Sure, but then you deserve what you get...

Posted May 25, 2011 2:48 UTC (Wed) by iabervon (subscriber, #722) [Link]

Actually, if the compiler were forced to consider overflow, it would just have to think a little harder before making the same optimizations. The compiler could determine that 1<<32 / GCD(1<<32, 0x04040404) > 256/64 and thus that the first time i >= 256/64, val == 0x04030200 with unsigned math, and val != 0x04030200 before that. Your loop would have to get slower (or, more likely, take an additional register) if it was going to use the same value in "val" multiple times, simply because it becomes necessary to track another piece of information.

(Also note that it doesn't matter if you declare the loop counter as an "unsigned int"; the nominal loop counter actually gets discarded entirely, in favor of a proxy loop counter, which is what could overflow.)

Sure, but then you deserve what you get...

Posted May 25, 2011 3:31 UTC (Wed) by iabervon (subscriber, #722) [Link]

Actually, if the compiler is forced to consider overflow, it just has to think a little harder before making the same optimizations. The compiler can determine that 1<<32 / GCD(1<<32, 0x04040404) > 256/4 and thus that the first time i >= 256/4, val == 0x04030200 with unsigned math, and val != 0x04030200 before that. Your loop would have to get slower (or, more likely, take an additional register) if it was going to use the same value in "val" multiple times, simply because it becomes necessary to track another piece of information.

(Also note that it doesn't matter if you declare the loop counter as an "unsigned int"; the nominal loop counter actually gets discarded entirely, in favor of a proxy loop counter, which is what could overflow.)

With gcc 4.4.5, replacing "int val" with "unsigned int val" makes it actually generate what you would expect of:

for (val = 0x03020100; val != 0x04030200; val += 0x04040404, value++)
    *value = val;

which avoids the "i = 0", "*value = val" is a simpler instruction than "value[i] = val", and uses one fewer register; but it still actually works. If the constant you're adding is 0x04000000, the optimization doesn't work, and gcc produces the slower code. The code it produces for "unsigned int" is the fastest working code possible, so it's not getting slower in any meaningful way by using "unsigned int" (I mean, it loops faster without testing the end condition, but...).

Sure, but then you deserve what you get...

Posted May 26, 2011 17:21 UTC (Thu) by anton (subscriber, #25547) [Link]

Does it really make sense that a loop gets slower just because you declare the loop counter as an "unsigned int" instead of an "int"?
It gets slower? Doesn't the code with int loop infinitely when compiled with gcc? Great optimization!

Sure, but then you deserve what you get...

Posted May 28, 2011 20:48 UTC (Sat) by BenHutchings (subscriber, #37955) [Link]

C compilers weren't smart enough to take advantage of the leeway given: they in fact did exhibit implementation-defined behavior, not undefined behavior, in the face of a signed overflow. They acted like the hardware acts upon signed overflow.

Which was to crash, in many cases. Signed overflow caused a processor exception, just like division by zero, because the result could not be represented.

Sure, but then you deserve what you get...

Posted May 28, 2011 20:46 UTC (Sat) by BenHutchings (subscriber, #37955) [Link]

Signed overflow *ought to have been* implementation defined in C from the beginning, not undefined.

Signed overflow results in an exception on some processors. So the range of permissible implementation-defined behaviour would have to include: the program aborts. Slightly better than the current situation, but not much.

Unsigned versus signed

Posted May 24, 2011 12:58 UTC (Tue) by cesarb (subscriber, #6266) [Link]

Wow...

That was an impressive one. I did not expect gcc to use *val* for the loop condition.

I think this is yet one more point in favor of a personal rule of "always use unsigned unless you have a good reason to use signed" (that is, use "unsigned" by default when programming). If you followed that rule, all three "int"s in this function would become "unsigned int", since there is no good reason to use signed here.
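
Applied to the table_init() example above, that rule would give roughly this (a sketch; unsigned wraparound is well defined, so the optimizer has no undefined overflow to exploit):

void table_init(unsigned int *value)
{
        unsigned int i;
        unsigned int val = 0x03020100u;

        for (i = 0; i < 256/4; i++) {
                value[i] = val;
                val += 0x04040404u;  /* wraps modulo 2^32 instead of overflowing */
        }
}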

Sure, but then you deserve what you get...

Posted May 25, 2011 10:14 UTC (Wed) by welinder (guest, #4699) [Link]

That piece of code does not invoke undefined behaviour unless "int" is too small to hold whatever value things sum up to.

This program may also invoke undefined behaviour:

int main (int argc, char **argv) { return 65535+1; }

Sure, but then you deserve what you get...

Posted May 26, 2011 17:05 UTC (Thu) by anton (subscriber, #25547) [Link]

Do you really want to punish the compiler and force it to keep the for-loop iteration variable in memory, just to make sure it's changed via a float if the pointers are mixed just right?
Sure, if I keep an induction variable in a global variable, static variable or an auto variable that I take the address of, I expect that variable to be in memory and to be fetched from there and/or stored there whenever I access it.

What's this nonsense about punishing the compiler? It's a thing without feelings, it cannot be punished. If it's a good compiler, it will do what I expect. Unfortunately, the most popular compilers become worse and worse with every release.

I would welcome the world that you dream of where these compilers miscompile every significant program. Then the pain would be so large that we all (well, you and a few others excepted) would finally scratch our itch and write a compiler that does not do any of that nonsense, and gcc etc. would go the way of XFree86, like they should.

Oh, forgot to say...

Posted May 24, 2011 10:19 UTC (Tue) by khim (subscriber, #9252) [Link] (3 responses)

Same for debugging: I may want to display some binary values as unsigned in hexadecimal even if I know that it is a float (so, a dirty cast), to see if I had a memory overflow and this float in fact contains a string...

Note that the ANSI C standard does not consider this use case important enough: you cannot print a float as hex via a pointer cast (it's even explained in the Wikipedia article) without triggering undefined behavior - but GNU C gives you that ability if you use a union. This is enough for debugging, but please don't leave it in your program afterwards: GCC is not the only compiler in the world, and you don't want to see your program broken later.
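
For example, a union-based version of the float-as-hex trick might look like this (a sketch; it assumes sizeof(float) == sizeof(unsigned int) and relies on GCC's documented support for union type punning):

#include <stdio.h>

union pun {
    float f;
    unsigned int u;
};

int main(void)
{
    union pun p;
    p.f = 1.0f;
    printf("bits of 1.0f: 0x%08x\n", p.u);  /* 0x3f800000 on IEEE-754 targets */
    return 0;
}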

Oh, forgot to say...

Posted May 24, 2011 18:35 UTC (Tue) by jrn (subscriber, #64214) [Link] (2 responses)

Wouldn't a loop to access the internal representation of a float through a pointer to char produce implementation-defined behavior, rather than nasal demons? On the other hand, reading through a pointer to unsigned int is problematic, of course.

Yes, that's true...

Posted May 25, 2011 5:01 UTC (Wed) by khim (subscriber, #9252) [Link] (1 responses)

Ah, my bad. Sure, but the temptation is very high to use a conversion to int, because they are of the same size - and this is impossible to do correctly even if you use two conversions like (int *)(char *)pf.

The only way to do that portably and correctly is via memcpy - and with the current crop of compilers it's quite efficient too (both the memcpy and the pointers will be elided), but this is counter-intuitive if you don't know about undefined behaviors and still think that C is a high-level assembler.

Yes, that's true...

Posted May 25, 2011 5:19 UTC (Wed) by jrn (subscriber, #64214) [Link]

Even memcpy is not portable to platforms where sizeof(float) != sizeof(int). :)

A rough and incomplete rule of thumb about the aliasing rules is that constructs that would be portable, given how alignment and size vary from platform to platform, are likely to be permitted.
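
A sketch of the character-pointer approach mentioned above; character-type access is exempt from the strict-aliasing rule, though the byte order you see is whatever the platform uses:

#include <stdio.h>

static void dump_float_bytes(float f)
{
    const unsigned char *p = (const unsigned char *)&f;
    size_t i;

    for (i = 0; i < sizeof f; i++)
        printf("%02x ", p[i]);
    printf("\n");
}

int main(void)
{
    dump_float_bytes(1.0f);  /* e.g. "00 00 80 3f" on a little-endian IEEE-754 machine */
    return 0;
}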

It's different viewpoints...

Posted May 24, 2011 8:55 UTC (Tue) by marcH (subscriber, #57642) [Link] (1 responses)

> Programmers usually assume that "undefined behavior" is some kind of "unknown behavior" and try to guess what's the worst case can be. Then they decide if they care or not.

The vast majority of developers are not aware of most undefined behaviours in the first place (except of course for those which tend to crash right away). Who knows by heart the "191 different kinds of undefined behaviour" mentioned above?

How do people learn to program? Not by reading language specifications, but from examples, trial and error. Once you have had some success using some technique, it is extremely counter-intuitive and difficult to think that the (undefined!) behaviour of this exact same technique actually depends on the direction of the wind. And the day it eventually bites you hard, you are so hurt that you just stop using it and certainly do not make guesses about it.

It's different viewpoints...

Posted May 26, 2011 19:28 UTC (Thu) by nix (subscriber, #2304) [Link]

That depends on the language, or rather on the sort of thing the language is typically used for. Communities built up around safety-critical software development, like Ada's, do tend to read the language standard. (Or so Robert Dewar assures us, and my limited experience with such communities suggests that he is right.)

It's different viewpoints...

Posted May 25, 2011 16:57 UTC (Wed) by jreiser (subscriber, #11027) [Link] (19 responses)

This means every time undefined behavior is detected by the compiler it can be assumed this part is never actually executed with arguments which can trigger undefined behavior

I prefer that the compiler and I work as a team with the common goal of converting ideas into executable instructions with good properties. I want the compiler to tell me when and where it detects undefined behavior, so that I can adjust as appropriate. I want the compiler to tell me when and where its transformations change the class of a loop (body never executes, body executes a bounded non-zero number of times, body executes infinitely many times). I want the compiler to tell me when the written and the transformed exit condition for a loop have no variables in common. I consider it to be a Usability bug that gcc 4.5.1 does not tell me these interesting facts about 33498.c.

It's different viewpoints...

Posted May 26, 2011 12:10 UTC (Thu) by marcH (subscriber, #57642) [Link] (18 responses)

The LLVM blog explains why most of these desired items are difficult or impossible to implement.

I am really surprised by the number of "the compiler is evil" comments here, whereas this series of LLVM articles just tried to demonstrate the opposite by explaining how things work.

> I want the compiler to tell me [loads]

Try writing Java code in Eclipse (I am dead serious).

It's different viewpoints...

Posted May 26, 2011 17:35 UTC (Thu) by anton (subscriber, #25547) [Link] (17 responses)

I am really surprised by the number of "the compiler is evil" comments here, whereas this series of LLVM articles just tried to demonstrate the opposite by explaining how things work.
Maybe that was the intention of the author, but he demonstrated that his compiler actually is evil. A while ago I started writing an advocacy piece against this kind of "optimizing C compilers" (or miscompilers) and for compilers that aim to create the code that the programmer expects; the interesting thing is that many of the arguments in my article are admitted in Chris Lattner's series (e.g., there is no way to know where your C program incurs undefined behaviour). I should pick my article up again and finally finish it.

It's different viewpoints...

Posted May 27, 2011 14:26 UTC (Fri) by mpr22 (subscriber, #60784) [Link] (16 responses)

Most optimizing compilers have a "do not optimize" mode. In gcc, "do not optimize" (-O0) is the default setting; you have to explicitly enable the footgun. -O0 still violates the programmer's expectations in C99 and C++98, though, since the "inline" keyword is ineffective, even on the tiniest of functions, when gcc's optimizer is disabled.

It's different viewpoints...

Posted May 29, 2011 8:46 UTC (Sun) by anton (subscriber, #25547) [Link] (15 responses)

Yes, I guess the way that gcc etc. are going, we should recommend -O0 as the option to use if people want their programs to behave as intended.

-O0 certainly violates my performance expectations, because I don't expect all local variables to end up in main memory (I expect that the compiler puts many of them in registers); but that (and inline) is just performance, the compiled program still does what is intended.

Chris Lattner recommends more specific flags for disabling some of the misfeatures of clang, but these are not complete (and even if they are now, tomorrow another version of Clang might introduce another misfeature that is not covered by these flags) and they don't work on all versions of all compilers, so -O0 is probably the best way that we have now to get intended behaviour for our programs. Of course, given the mindset of the gcc developers (and obviously also the clang developers), there is no guarantee that -O0 will continue to produce the intended behaviour in the future.

I wonder why they put so much effort into "optimization" if the recommendation is to disable some or all of these "optimizations" in order to get the program working as intended. In my experience gcc-2.x did not have this problem: I could compile my programs with -O (or even -O2) and they still worked as intended without any further ado (and they performed much better than with gcc-4.x -O0). Too bad there is no gcc-2.x for AMD64, or I would just forget about gcc-4.x.

It's different viewpoints...

Posted May 29, 2011 9:51 UTC (Sun) by nix (subscriber, #2304) [Link] (10 responses)

I wonder why they put so much effort in "optimization" if the recommendation is to disable some or all of these "optimizations" in order to get the program working as intended.
Because one of the ways people intend their programs to work is 'fast', and because not all code is the sort of riceboy rocket science that gets broken by these optimizations? I've personally written code that fell foul of aliasing optimizations precisely twice, and every time I knew I was doing something dirty when I did it.

Come on! Who writes *(foo *)&thing_of_type_bar and doesn't think 'that is ugly and risky, there must be a better way'?

It's different viewpoints...

Posted May 30, 2011 17:46 UTC (Mon) by anton (subscriber, #25547) [Link] (9 responses)

Who writes *(foo *)&thing_of_type_bar and doesn't think 'that is ugly and risky, there must be a better way'?
I write such code. It's ugly, true. It did not use to be risky until some compiler writers made it so; on the contrary, it worked as I expected on all targets supported by gcc-2.x (and mpr22's example of code that's not 64-bit clean is a red herring; that's a portability bug that does not work with -O0 or gcc-2.x -O either, so it has nothing to do with the present discussion).

But code like this is one of the reasons for using C rather than, say, Java. C is supposed to be a low-level language, or a portable assembler; even Chris Lattner claims that "the designers of C wanted it to be an extremely efficient low-level programming language". This is one of the things I want to do when I choose a low-level language.

One of my students implemented Postscript in C#; it was interesting to see what was not possible (or practical) in that language and how inefficient the workarounds were. If we were to write in the common subset of C and C#/Java, as the defenders of misbehaviour in gcc and clang suggest, we would be similarly inefficient, and the "optimizations" that these language restrictions enable won't be able to make up for that inefficiency by far.

Maybe your programs are not affected in this way, unlike some of my programs, but then you don't need a low-level language and could just as well use a higher-level language.

Sorry, but this is just wrong...

Posted May 30, 2011 18:47 UTC (Mon) by khim (subscriber, #9252) [Link] (8 responses)

It did not use to be risky until some compiler writers made it so; on the contrary, it worked as I expected on all targets supported by gcc-2.x

Sorry, but this is just not true. On lots of platforms it was flaky because the FPU was physically separate. Most of them were embedded, but it was a problem for 80386 CPU + 80287 FPU systems (yes, that's legal, and yes, such platforms were actually produced), for example. Sure, some platforms were perfectly happy with such code. But then, if you want a low-level non-portable language... asm is always there.

Maybe your programs are not affected in this way, unlike some of my programs, but then you don't need a low-level language and could just as well use a higher-level language.

Or, alternatively, you can actually read the specifications and see what the language actually supports. Most (but not all) "crazy behaviors" of gcc and clang just faithfully emulate hardware portability problems, nothing more, nothing less. It's kind of funny, but real low-level stuff (like OS kernels or portable on-bare-metal programs) usually survives "evil compilers" just fine. It's the code from "programmer cowboys" who know how the 8086 works and ignore everything else which is problematic.

C as portable assembler

Posted May 30, 2011 18:56 UTC (Mon) by jrn (subscriber, #64214) [Link] (5 responses)

> Sure, some platforms were perfectly happy with such code. But then, if you want a low-level non-portable language... asm is always there.

And so is C. :) After all, what language is the Linux kernel written in?

> Or, alternatively, you can actually read specifications and see what the language actually supports.

I don't think the case of signed overflow is one of trial and error versus reading specifications. It seems more like one of folk knowledge versus new optimizations --- old gcc on x86 and many similar platforms would use instructions that wrap around for signed overflow, so when compiling old code that targeted such platforms, it seems wise to use -fwrapv, and when writing new code it seems wise to add assertions to document why you do not expect overflow to occur.
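
Such an assertion might look roughly like this (a sketch; the helper name is made up):

#include <assert.h>
#include <limits.h>

/* Add two ints, asserting that the mathematical sum stays representable. */
static int checked_add(int a, int b)
{
        assert(!(b > 0 && a > INT_MAX - b));
        assert(!(b < 0 && a < INT_MIN - b));
        return a + b;
}

int main(void)
{
        return checked_add(40, 2) == 42 ? 0 : 1;
}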

Of course, reading the spec can be a pleasant experience independently from that.

C as portable assembler

Posted May 31, 2011 7:17 UTC (Tue) by khim (subscriber, #9252) [Link] (2 responses)

> Sure, some platforms were perfectly happy with such code. But then, if you want a low-level non-portable language... asm is always there.

And so is C. :) After all, what language is the Linux kernel written in?

The Linux kernel is written in C, quite portably, and people fight constantly to fix hardware and software compatibility problems. Note that while GCC improvements are the source of a few errors, they are dwarfed by the number of hardware compatibility errors. Most of the compiler problems happen when people forget to use the appropriate constructs defined to keep the hardware happy: for some reason, the macro constructs designed to fight hardware quirks also make code sidestep a wide range of undefined C behaviors. Think about it.

I don't think the case of signed overflow is one of trial and error versus reading specifications.

It is, as was explained before. There are other similar cases. For example, the standard gives you the ability to convert a pointer to an int in some cases, but even then you cannot convert the int back to a pointer, because on some platforms a pointer is not just a number - yet people who don't know better often do that. Will you object if gcc and/or clang start to miscompile such programs tomorrow?

It seems more like one of folk knowledge versus new optimizations --- old gcc on x86 and many similar platforms would use instructions that wrap around for signed overflow, so when compiling old code that targeted such platforms, it seems wise to use -fwrapv, and when writing new code it seems wise to add assertions to document why you do not expect overflow to occur.

Note that all these new optimizations are perfectly valid for portable code. Surprisingly enough, -fwrapv exists not to make broken programs valid but to make sure that Java's overflow semantics are implementable in GCC. Sure, you can use it in C, but that does not mean your code is suddenly correct.

Of course, reading the spec can be a pleasant experience independently from that.

Actually it's kind of sad that the only guide we have here is the standard... Given how often undefined behavior bites us, you'd think we'd have books which explain where and how it can be triggered in "normal" code. Why do people accept that i = i++ + ++i; is unsafe and unpredictable code but lots of other cases which trigger undefined behavior are perceived as safe? It's a matter of education...

C as portable assembler

Posted May 31, 2011 17:56 UTC (Tue) by anton (subscriber, #25547) [Link] (1 responses)

Why do people accept that i = i++ + ++i; is unsafe and unpredictable code
  1. Who would write "i = i++ + ++i;" anyway?
  2. It is easy to write what you intended here (whatever that was) in a way that's similarly short and fast and generates a similar amount of code.
but lots of other cases which trigger undefined behavior are perceived as safe?
Because they were safe, until the gcc maintainers decided to break them (and the LLVM maintainers follow them like lemmings).

That's the point...

Posted Jun 1, 2011 9:16 UTC (Wed) by khim (subscriber, #9252) [Link]

It is easy to write what you intended here (whatever that was) in a way that's similarly short and fast and generates a similar amount of code.

It's easy to do in other cases, too. You can always use memcpy to copy from float to int. GCC will eliminate memcpy and unneeded variables.

$ cat test.c
#include <string.h>

int convert_float_to_int(float f) {
  int i;
  memcpy(&i, &f, sizeof(float));
  return i;
}
$ gcc -O2 -S test.c
$ cat test.s
        .file "test.c"
        .text
        .p2align 4,,15
.globl convert_float_to_int
        .type convert_float_to_int, @function
convert_float_to_int:
.LFB22:
        .cfi_startproc
        movss %xmm0, -4(%rsp)
        movl -4(%rsp), %eax
        ret
        .cfi_endproc
.LFE22:
        .size convert_float_to_int, .-convert_float_to_int
        .ident "GCC: (Ubuntu 4.4.3-4ubuntu5) 4.4.3"
        .section        .note.GNU-stack,"",@progbits

Because they were safe, until the gcc maintainers decided to break them (and the LLVM maintainers follow them like lemmings).

They were never completely safe, although the cases where they break were rare. Today it happens more often. This is not the end of the world, but it is something you must know and accept.

C as portable assembler

Posted May 31, 2011 18:05 UTC (Tue) by anton (subscriber, #25547) [Link] (1 responses)

The funny thing is that new gcc (at least up to 4.4) still generates code for signed addition that wraps around instead of code that traps on overflow. It's as if the aim of the gcc maintainers was to be least helpful to everyone: for the low-level coders, miscompile their code silently; for the specification pedants, avoid giving them ways to detect when they have violated the spec.

C as portable assembler

Posted May 31, 2011 21:00 UTC (Tue) by jrn (subscriber, #64214) [Link]

There are -fwrapv and -ftrapv. If you think one of those should be the default, no doubt there are some other optimization tweaks (-fno-strict-aliasing?) that you also like; so I encourage you to work on an -Oanton switch in either a wrapper for gcc or gcc itself, for the sake of sharing.

I am pretty happy with -O2 for my own needs, since it generates fast code for loops, but I understand that different situations may involve different requirements and would be happy to live in a world with more people helping make the gcc UI more intuitive.
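
A tiny illustration (the function is hypothetical) of where the choice matters: under the default rules the compiler may assume signed overflow never happens and fold this test to a constant 1, under -fwrapv the wrapped comparison has to be evaluated, and under -ftrapv the addition itself aborts at run time when it overflows:

int still_larger_after_adding_100(int a) {
    return a + 100 > a;   /* not a reliable overflow test without -fwrapv */
}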

Sorry, but this is just wrong...

Posted May 31, 2011 17:39 UTC (Tue) by anton (subscriber, #25547) [Link] (1 responses)

On lots of platforms it was flaky because the FPU was physically separate.
My code ran fine on systems with physically separate FPU (e.g., MIPS R2000+R2010). Also, why should the separate FPU affect the types foo and bar?

Anyway, if there is a hardware issue that we have to deal with, that's fine with me, and I will deal with it. But a compiler that miscompiles on hardware that's perfectly capable of doing what I intend (as evidenced by the fact that gcc-2.x -O and gcc-4.x -O0 achieve what I intend) is a totally different issue.

But then if you want a low-level non-portable language... asm is always there.
I want a relatively portable low-level language, and gcc-2.x -O and (for now) gcc-4.x -O0 provide that, and my code ports nicely to all the hardware I can get my hands on (and to some more that I cannot); that's definitely not the case for asm, and it's not quite the case with gcc-4.x -O. For now we work around the breakage of gcc-4.x, but the resulting code is not as fast and small as it could be; and we did not have to endure such pain with gcc-2.x.

And new gcc releases are the biggest source of problems for my code. New hardware is much easier.

Or, alternatively, you can actually read specifications and see what the language actually supports.

The C standard specification is very weak, and has more holes than content (was it 190 undefined behaviours? Plus implementation-defined behaviours). Supposedly the holes are there to support some exotic platforms (such as ones-complement machines where signed addition traps on overflow). Sure, people who want to port to such platforms will have to avoid some low-level-coding practices, and will have to suffer the pain that you want to inflict on all of us.

But most of us and our code will never encounter such platforms (these are deservedly niche platforms, and many of these niches become smaller and smaller over time) and we can make our code faster and smaller with these practices, at least if we have a language that supports them. The language implemented by gcc-2.x does support these practices. We just need a compiler for this language that targets modern hardware.

So, the ANSI C specification and the language it specifies are pretty useless, because of these holes. Even Chris Lattner admits that "There is No Reliable Way to Determine if a Large Codebase Contains Undefined Behavior". So what you suggest is just impractical. Not only would it mean giving up on low-level code; the compiler could still decide to format the hard disk if it likes, because most likely the code still contains some undefined behaviour.

It's kind of funny, but real low-level stuff (like OS kernels or portable on-bare-metal programs) usually survives "evil compilers" just fine.
I have seen enough complaints from kernel developers about breakage from new gcc versions, and that despite the fact that Linux is probably one of the few programs (apart from SPEC CPU) that the gcc maintainers care about. The kernel developers do what we do, they try to work around the gcc breakage, but I doubt that that's the situation they wish for.

Well, if you want some different language you can create it...

Posted Jun 1, 2011 9:14 UTC (Wed) by khim (subscriber, #9252) [Link]

Also, why should the separate FPU affect the types foo and bar?

If the FPU module is weakly tied to the CPU module then you must explicitly synchronize them. Functions like memcpy did that, so they were safe; regular operations didn't. That's why the standard only supports one way to do what you want, and it's memcpy. Which is eliminated completely by modern compilers like gcc or clang where it's not needed.

My code ran fine on systems with physically separate FPU (e.g., MIPS R2000+R2010).

Well, sure. Not all combinations were flaky. It does not change the fact that the only way to convert a float to an int safely was, is and will be memcpy (until the next revision of the C standard, at least).

But a compiler that miscompiles on hardware that's perfectly capable of doing what I intend (as evidenced by the fact that gcc-2.x -O and gcc-4.x -O0 achieve what I intend) is a totally different issue.

Why? The compiler just makes your hardware less predictable, but only within the boundaries outlined in the standard. That's what the compiler is supposed to do! These boundaries were chosen to support a wide range of architectures and optimizations, but if you want different boundaries, feel free to create a new language.

For now we work around the breakage of gcc-4.x, but the resulting code is not as fast and small as it could be; and we did not have to endure such pain with gcc-2.x.

This is possible, but it just shows that you need some different capabilities from your language. You can propose extensions and/or optimizations to the gcc and clang developers to make your code faster. Complaining that your code does not work when it clearly violates the spec will lead you nowhere.

GCC and Clang developers are not stuck-up snobs; they are ready to add new extensions when they are needed (this changes the language spec and makes some previously undefined constructs defined), but they are clearly reluctant to guess what your code is doing without your help: if they have no clear guidance they use the spec.

The language implemented by gcc-2.x does support these practices.

Ok, if you like it, then use it.

We just need a compiler for this language that targets modern hardware.

No problem, it's there. It may look like a stupid excuse, but it's not. The only way to support something which exists only as an implementation on a newer platform is emulation. That's why there are all these PDP-11 emulators, NES emulators, etc.

If you want something different, then first you must document the assumptions which don't adhere to the C99 spec, and then talk with the compiler developers.

So, the ANSI C specification and the language it specifies are pretty useless, because of these holes.

Huh? Where does THIS come from? The compiler does not know what the program actually does, but surely the programmer who wrote it does! S/he can avoid undefined behaviors, even if it's not always simple and/or easy. If the programmer likes to play Russian roulette with the language, it's his/her choice, but then he should expect to be blown to bits from time to time.

Not only would it mean giving up on low-level code; the compiler could still decide to format the hard disk if it likes, because most likely the code still contains some undefined behaviour.

The compiler can only do that if you trigger undefined behavior. Most undefined behaviors are pretty easy to spot and avoid, but some of them are not. Instead of whining here and asking for pie in the sky which will never materialize, you can propose changes to the specification which will simplify the programmer's life. Then they may be adopted either as extensions or as a new version of C. It's supposed to be revised every 10 years, you know.

The kernel developers do what we do, they try to work around the gcc breakage, but I doubt that that's the situation they wish for.

Sure, but they use new GCC capabilities when they become available too, and they have accepted that these come as a package deal. Note that the kernel no longer supports GCC 2.95 at all.

There was a time when the kernel developers said what you are saying and refused to support newer versions of GCC, but this led them nowhere. Today the Linux kernel can be compiled with GCC 4.6 just fine.

It's different viewpoints...

Posted May 29, 2011 10:24 UTC (Sun) by mpr22 (subscriber, #60784) [Link]

My programs behave as intended when compiled with the GNU Compiler Collection version 4.6 at -O2, and when they don't, it's because I didn't correctly express my intention in the first place.

It may help that over the years, I've had to deal with sizeof(int) doing what the standard says it might (i.e. not be the same on all platforms), and I've been caught out by sizeof(long) doing the same (I wrote a game on i686 round about the time amd64 was launched; the RNG blew up when someone tried to use it on amd64, because I'd written "long" in a place where what I actually meant was "32 bits").

So in the case of the example above (where a bounded loop became infinite because of the way the compiler treated a body of code whose behaviour is formally undefined under the C standard), I'm not actually all that sympathetic. <stdint.h> exists; it behooves the responsible C programmer to use it.
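
A hedged sketch of what that looks like with <stdint.h> (the constants are the familiar Numerical Recipes LCG parameters, used here purely as an example): naming the width removes any dependence on sizeof(long), and unsigned arithmetic wraps modulo 2^32 by definition, so no undefined behaviour is involved:

#include <stdint.h>

uint32_t lcg_next(uint32_t state) {
    return state * 1664525u + 1013904223u;   /* wraps mod 2^32, well defined */
}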

It's different viewpoints...

Posted May 29, 2011 19:27 UTC (Sun) by iabervon (subscriber, #722) [Link] (2 responses)

In the 2.xx days, at least, -O2 meant "use all optimizations that don't change the behavior of any programs, including invalid ones, except totally crazy stuff" (e.g., if you start poking around at your stack frames or use non-volatile pointers to do MMIO, all bets are off). -O3 and higher would give you optimizations that wouldn't work with programs that do things that are technically not permitted. It would be nice if they had an optimization level that would be suitable for what most people think C is, as well as one that tells the compiler that the programmer has carefully avoided any undefined behavior.

It's different viewpoints...

Posted May 30, 2011 1:01 UTC (Mon) by vonbrand (guest, #4458) [Link] (1 responses)

AFAICS, nothing whatsoever has changed then; undefined behaviour is "totally crazy stuff that nobody in their right mind would expect to work as intended everywhere"...

It's different viewpoints...

Posted May 30, 2011 1:56 UTC (Mon) by iabervon (subscriber, #722) [Link]

In the 2.xx days, the "totally crazy" stuff was what actual C programmers, who only had 3rd-hand knowledge of the spec, knew couldn't be defined. Pretty much all of the available processors used 2's complement, and everyone assumed that signed overflow used 2's complement and certainly produced some value or other. You couldn't be sure what you'd get from an uninitialized variable, but it would produce some value (and would continue to have that value until you wrote to it). On the other hand, people had no idea what the function call ABI was, or how the stack frame would be laid out, so they couldn't guess what would happen with undefined behavior there. It's gone from "it's hard to get it wrong" (you needed to know a lot about your platform to write code that breaks going from -O0 to -O3) to "it's hard to get it right" (you need to know a lot about the C language to avoid writing code that breaks going from -O0 to -O2).
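
An illustrative example of that second situation (the function is made up): this loop terminates under wrap-around semantics at -O0, but because signed overflow is undefined an optimizer may assume i + 1 > i always holds and turn it into an infinite loop at -O2:

#include <limits.h>

int count_steps(void) {
    int n = 0;
    int i;
    for (i = INT_MAX - 2; i < i + 1; i++)   /* UB once i reaches INT_MAX */
        n++;
    return n;
}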

