path: root/fold.c
2022-01-28  implement float -> unsigned casts  (Bor Grošelj Simić)

amd64 lacks an instruction for this, so it has to be implemented with
float -> signed casts. The approach is borrowed from llvm.

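For reference, a minimal C sketch of this style of lowering (illustrative
only, not the code QBE emits; the helper name is mine): a value below 2^63
converts directly through the signed path, otherwise 2^63 is subtracted
before the signed conversion and added back afterwards.

    #include <stdint.h>
    #include <stdio.h>

    /* float -> unsigned via float -> signed conversions (sketch) */
    static uint64_t dtou64(double x)
    {
        if (x < 9223372036854775808.0)          /* x < 2^63: signed cast suffices */
            return (uint64_t)(int64_t)x;
        /* shift into signed range, convert, then add 2^63 back */
        return (uint64_t)(int64_t)(x - 9223372036854775808.0)
            + ((uint64_t)1 << 63);
    }

    int main(void)
    {
        printf("%llu\n", (unsigned long long)dtou64(1e19));  /* 10000000000000000000 */
        return 0;
    }
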
2022-01-28  implement unsigned -> float casts  (Bor Grošelj Simić)

amd64 lacks an instruction for this, so it has to be implemented with
signed -> float casts:

 - Word casting is done by zero-extending the word to a long and then
   doing a regular signed cast.
 - Long casting is done by dividing by two with correct rounding if the
   highest bit is set, casting that to float, and then adding 1 to the
   mantissa with integer addition.

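A rough C equivalent of the two cases (illustrative only; the helper names
are mine, and the final doubling is written as a floating multiply here
rather than the integer addition on the float's bit pattern that the
commit describes):

    #include <stdint.h>
    #include <stdio.h>

    /* word (32-bit) case: zero-extend, then a plain signed conversion */
    static double u32tod(uint32_t w)
    {
        return (double)(int64_t)(uint64_t)w;
    }

    /* long (64-bit) case: if the high bit is set, halve with round-to-odd
     * so the result still rounds correctly, convert, then double */
    static double u64tod(uint64_t x)
    {
        if ((int64_t)x >= 0)
            return (double)(int64_t)x;
        return (double)(int64_t)((x >> 1) | (x & 1)) * 2.0;
    }

    int main(void)
    {
        printf("%.1f\n", u32tod(0xffffffffu));   /* 4294967295.0            */
        printf("%.1f\n", u64tod(~(uint64_t)0));  /* 18446744073709551616.0  */
        return 0;
    }
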
2022-01-23  Add a negation instruction  (Eyal Sawady)

Necessary for floating-point negation, because `%result = sub 0, %operand`
doesn't give the correct sign for 0/-0.

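A small C demonstration of the corner case (nothing QBE-specific here):
subtracting from zero produces +0.0 when the operand is +0.0, while a true
negation must flip the sign bit and yield -0.0.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double x = 0.0;
        /* 0.0 - 0.0 is +0.0 under round-to-nearest, but -x is -0.0 */
        printf("signbit(0 - x) = %d\n", signbit(0.0 - x) != 0);  /* 0 */
        printf("signbit(-x)    = %d\n", signbit(-x) != 0);       /* 1 */
        return 0;
    }
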
2021-11-22  reuse previous address constants in fold()  (Michael Forney)

parseref() has code to reuse address constants, but this is not done in
other passes such as fold or isel. Introduce a new function newcon()
which takes a Con and returns a Ref for that constant, and use this
whenever creating address constants.

This is necessary to fix folding of address constants when one operand
is already folded. For example, in

    %a =l add $x, 1
    %b =l add %a, 2
    %c =w loadw %b

%a and %b were folded to $x+1 and $x+3 respectively, but then the second
add is visited again since it uses %a. This gets folded to $x+3 as well,
but as a new distinct constant. This results in %b getting labeled as
bottom instead of either constant, disabling the replacement of %b by a
constant in subsequent instructions (such as the loadw).

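The interning idea in isolation, as a hedged sketch (the types and names
below are illustrative stand-ins, not QBE's actual Con/Ref): looking a
constant up before appending a new one guarantees that folding the same
address twice yields the same reference, so the lattice sees one value
instead of two.

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    typedef struct {
        const char *sym;   /* symbol name, e.g. "$x" */
        int64_t off;       /* constant offset        */
    } Con;

    static Con contbl[256];     /* overflow handling omitted in this sketch */
    static int ncon;

    static int newcon(const char *sym, int64_t off)
    {
        for (int i = 0; i < ncon; i++)
            if (strcmp(contbl[i].sym, sym) == 0 && contbl[i].off == off)
                return i;                 /* reuse the existing constant */
        contbl[ncon].sym = sym;
        contbl[ncon].off = off;
        return ncon++;
    }

    int main(void)
    {
        int a = newcon("$x", 3);
        int b = newcon("$x", 3);          /* folded again: same index back */
        printf("%d %d\n", a, b);          /* prints "0 0" */
        return 0;
    }
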
2021-11-10  fold: Prevent error when address is used as operand  (Michael Forney)

2021-11-10  fold: Don't fold invalid addition/subtraction rather than failing  (Michael Forney)

This may happen in a branch QBE doesn't realize is unreachable, for
example (simplified from real code found in ncurses):

    data $str = { b "abcdef", b 0 }
    function l $f(w %x) {
    @start
        %.1 =w ceqw %x, 0
        jnz %.1, @logic_join, @logic_right
    @logic_right
        %p =l call $strchr(l $str, w %x)
        %.2 =w ceql %p, 0
    @logic_join
        %.3 =w phi @start %.1, @logic_right %.2
        jnz %.3, @fail, @return
    @fail
        ret 0
    @return
        %.4 =l sub %p, $str
        ret %.4
    }

2021-03-02  silence a gcc10 warning  (Quentin Carbonneaux)

2020-10-05  fold: zero-initialize padding bits of constants  (Michael Forney)

Otherwise, if a constant is stored as a float and retrieved as an int,
the padding bits are uninitialized. This can result in the generation
of invalid assembly:

    Error: suffix or operands invalid for `cvtsi2ss'

Reported by Hiltjo Posthuma.

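An illustration of the bug class in plain C (this is not QBE's Con
structure, just a stand-in union): writing a narrow float member leaves
the rest of a wider integer member holding whatever was already in
memory, unless the whole object is zeroed first.

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    union val {
        int64_t i;
        float   s;    /* only 4 of the 8 bytes */
    };

    int main(void)
    {
        union val a, b;
        memset(&a, 0, sizeof a);     /* padding zeroed: a.i is well defined    */
        memset(&b, 0xff, sizeof b);  /* stand-in for "uninitialized" memory    */
        a.s = 1.0f;
        b.s = 1.0f;
        /* same float constant, but the wide views differ because of the
         * leftover bytes in b (exact values depend on endianness) */
        printf("a.i = %016llx\nb.i = %016llx\n",
               (unsigned long long)a.i, (unsigned long long)b.i);
        return 0;
    }
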
2019-04-29  fix folding of unsigned operations  (Quentin Carbonneaux)

This fixes bugs similar to the ones fixed in the previous commit. In
the folding code the invariant is that when a result is 32 bits wide,
the low 32 bits of 'x' are correct. The high bits can be anything.

2019-04-29  fold: Make sure 32-bit constants get sign extended when necessary  (Michael Forney)

2019-02-27  Let runtime crash on zero div, don't fold it.  (Andrew Chambers)

Remarks from Quentin:

It is an important decision to use Bot and not Top as the result of
'x / 0'. By using Bot, we refuse to give a warrant to the compiler that
would allow meaningless subsequent decisions. An example follows.

Clang, on my computer, will build a program which prints "Ho" when fed
the following C:

    int main() {
        puts(1/0 ? "Hi" : "Ho");
    }

On the other hand, a C compiler based on QBE will build a program which
crashes, as one would expect.

See also https://c9x.me/notes/2014-09-10.html

2018-04-26  Fix compiler warnings.  (Emil Skoeldberg)

Compiler warned about comparison between signed and unsigned values.

2017-05-17  intern symbol names  (Quentin Carbonneaux)

Symbols in the source file are still limited in length because the rest
of the code assumes that strings always fit in NString bytes.
Regardless, there is already a benefit because comparing/copying symbol
names does not require using strcmp()/strcpy() anymore.

2017-04-08  prepare for multi-target  (Quentin Carbonneaux)

This big diff makes multiple changes to allow the addition of new
targets to qbe. The changes are listed below in decreasing order of
impact.

1. Add a new Target structure. To add support for a given target, one
   has to implement all the members of the Target structure. All the
   source files were changed to use this interface where needed.

2. Single out amd64-specific code. In this commit, the amd64 target
   T_amd64_sysv is the only target available; it is implemented in the
   amd64/ directory. All the non-static items in this directory are
   prefixed with either amd64_ or amd64_sysv (for items that are
   specific to the System V ABI).

3. Centralize Ops information. There is now a file 'ops.h' that must be
   used to store all the available operations together with their
   metadata. The various targets will only select what they need; but
   it is beneficial that there is only *one* place to change to add a
   new instruction. One good side effect of this change is that any
   operation 'xyz' in the IL now has a corresponding 'Oxyz' in the code.

4. Misc fixes. One notable change is that instruction selection now
   generates generic comparison operations and the lowering to the
   target's comparisons is done in the emitter. GAS directives for data
   are the same for many targets, so data emission was extracted in a
   file 'gas.c'.

5. Modularize the Makefile. The Makefile now has a list of C files that
   are target-independent (SRC), and one list of C files per target.
   Each target can also use its own 'all.h' header (for example to
   define registers).

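A hedged sketch of the pattern described in point 1 above (the types and
member names here are hypothetical stand-ins, not the real Target in
all.h): each target fills in a table of hooks plus a little metadata, and
target-independent code only ever calls through that table.

    #include <stdio.h>

    typedef struct Fn Fn;                  /* stand-in for the IL function type */

    typedef struct {
        const char *name;                  /* e.g. "amd64_sysv"     */
        void (*isel)(Fn *);                /* instruction selection */
        void (*emitfn)(Fn *, FILE *);      /* assembly emission     */
    } Target;

    static void amd64_isel(Fn *fn) { (void)fn; /* select amd64 instructions */ }
    static void amd64_emitfn(Fn *fn, FILE *f) { (void)fn; fprintf(f, "# amd64 asm\n"); }

    static const Target T_amd64_sysv = { "amd64_sysv", amd64_isel, amd64_emitfn };

    int main(void)
    {
        const Target *T = &T_amd64_sysv;   /* the driver picks a target once */
        printf("target: %s\n", T->name);
        T->emitfn(NULL, stdout);
        return 0;
    }
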
2017-02-22  do not err on address comparisons  (Quentin Carbonneaux)

While I was at it I also refreshed some bits in the instruction
selection.

2017-02-06  fix edge deletion bug in sccp  (Quentin Carbonneaux)

When an edge is deleted, the phis and predecessors of the destination
block have to be updated. This is what blkdel() was doing, but at a
level too coarse. The new function edgedel() allows removing a single
edge (and takes care of multiple edges).

2017-02-06  use uint for block ids  (Quentin Carbonneaux)

2017-01-12  use a less obtuse api for vnew()  (Quentin Carbonneaux)

2016-10-24  fix bug in folding of w comparisons  (Quentin Carbonneaux)

The casting to uint32_t made the code for comparing two signed words
invalid. Interestingly, this can be fixed by casting to int32_t
instead. Because sign extension is monotonic, all the unsigned
comparisons remain valid. CVC4 can even check that for us:

    x, y : BITVECTOR(32);
    QUERY BVLT(SX(x, 64), SX(y, 64)) <=> BVLT(x, y);
    QUERY BVLE(SX(x, 64), SX(y, 64)) <=> BVLE(x, y);
    QUERY BVGT(SX(x, 64), SX(y, 64)) <=> BVGT(x, y);
    QUERY BVGE(SX(x, 64), SX(y, 64)) <=> BVGE(x, y);

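The same property, spot-checked in plain C (a quick sanity check rather
than a proof; the CVC4 queries above are the authoritative statement):
sign-extending two 32-bit values to 64 bits preserves both their signed
and their unsigned ordering.

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    static void check(uint32_t x, uint32_t y)
    {
        int64_t sx = (int32_t)x, sy = (int32_t)y;            /* sign-extended  */
        assert(((int32_t)x < (int32_t)y) == (sx < sy));      /* signed order   */
        assert((x < y) == ((uint64_t)sx < (uint64_t)sy));    /* unsigned order */
    }

    int main(void)
    {
        uint32_t v[] = { 0, 1, 2, 0x7ffffffe, 0x7fffffff,
                         0x80000000, 0x80000001, 0xfffffffe, 0xffffffff };
        for (unsigned i = 0; i < sizeof v / sizeof v[0]; i++)
            for (unsigned j = 0; j < sizeof v / sizeof v[0]; j++)
                check(v[i], v[j]);
        puts("ok");
        return 0;
    }
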
2016-08-15  specify the allocation function in vnew  (Quentin Carbonneaux)

2016-04-22  refine fp conversion instructions  (Quentin Carbonneaux)

2016-04-20  match jumps/ops with il text  (Quentin Carbonneaux)

2016-04-19  use assert for ssa invariants in fold/copy  (Quentin Carbonneaux)

2016-04-19  rename only live phi arguments in fold  (Quentin Carbonneaux)

AFL found that.

2016-04-17  compute dead phi args correctly in fold  (Quentin Carbonneaux)

The code computing if "the" edge of a phi argument is live or dead was
wrong. AFL found that.

2016-04-12  cosmetic modification in fold  (Quentin Carbonneaux)

2016-04-12  simplify latmerge()  (Quentin Carbonneaux)

2016-04-12  the lattice merge has to be used in update()  (Quentin Carbonneaux)

2016-04-12  oops, dumb bug in folding  (Quentin Carbonneaux)

2016-04-12  diagnose some undefined uses  (Quentin Carbonneaux)

2016-04-09  this can be false  (Quentin Carbonneaux)

I think I should check for Top when rewriting. Because for sure, we
don't want to replace some temporary by Top: first, I can't represent
it, and second, it'd mean that it is a temporary used undefined.

The example that triggered the assertion was like this:

    @b0
        jnz 1, @b1, @b3
    @b1
        %x =w phi @b0 10, @b2 %x1
        jnz %x, @b2, @b3
    @b2
        %x1 =w sub %x, 1
        jmp @b1
    @b3
        %x2 =w phi @b1 %x, @b0 42

So SCCP optimistically assumes %x is 10 when it goes through @b1, then
at the branch, @b1->@b3 is left for dead. After that, @b2 is processed
and the flow worklist is empty.

Now uses are processed; %x2 will get processed regardless of whether
its block is dead or not (it is at that time), then all the back-edges
are dead so the phi is set to the lattice merge of tops only. BOOM!

This leads to the question: should phis of dead blocks be processed?
I'll check that later.

2016-04-09  fix wrong assertion in fold  (Quentin Carbonneaux)

2016-04-09  oops, forgot to patch phi arguments  (Quentin Carbonneaux)

2016-04-09  more debug tweaks in fold  (Quentin Carbonneaux)

2016-04-09  add a proper block deletion routine  (Quentin Carbonneaux)

2016-04-09  nicer debug info  (Quentin Carbonneaux)

2016-04-09  quickly hack fold rewriting  (Quentin Carbonneaux)

2016-04-07  use cast in czero()  (Quentin Carbonneaux)

2016-04-07  inline latmerge() (cross fingers)  (Quentin Carbonneaux)

2016-04-07  adjustments in sccp  (Quentin Carbonneaux)

2016-04-07  add boring folding code  (Quentin Carbonneaux)

2016-04-06  start work on constant propagation  (Quentin Carbonneaux)