Mirror of https://github.com/c64scene-ar/llvm-6502.git, synced 2024-11-11 23:05:31 +00:00
Commit 79590b8edf
The demanded-bits optimization was transforming:

    %shr = lshr i64 %key, 3
    %0 = load i64* %val, align 8
    %sub = add i64 %0, -1
    %and = and i64 %sub, %shr
    ret i64 %and

to:

    %shr = lshr i64 %key, 3
    %0 = load i64* %val, align 8
    %sub = add i64 %0, 2305843009213693951
    %and = and i64 %sub, %shr
    ret i64 %and

The demanded-bits optimization is actually a pessimization here, because "add -1" would be codegen'ed as "sub 1". Teach the demanded-constant shrinking optimization to check for a negated constant, to make sure it is actually reducing the width of the constant.

rdar://11793464

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@160101 91177308-0d34-0410-b5e6-96231b3b80d8
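For context on the constant: `lshr i64 %key, 3` guarantees the top 3 bits of %shr are zero, so the `and` only demands the low 61 bits of %sub. Masking -1 down to those 61 bits yields 2^61 - 1 = 2305843009213693951, which no longer encodes as a cheap immediate. The sketch below is a minimal, self-contained C++ illustration of the kind of profitability check the commit describes; it is not the actual LLVM code, and `minSignedBits`, `immWidth`, and `worthShrinking` are hypothetical names used only for this example.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>

// Smallest number of bits needed to encode V as a signed immediate.
static unsigned minSignedBits(int64_t V) {
  for (unsigned N = 1; N < 64; ++N) {
    int64_t Lo = -(int64_t(1) << (N - 1));
    int64_t Hi = (int64_t(1) << (N - 1)) - 1;
    if (V >= Lo && V <= Hi)
      return N;
  }
  return 64;
}

// Effective width of C as an add/sub immediate: "add X, -C" is normally
// emitted as "sub X, C", so take the cheaper of C and its negation.
static unsigned immWidth(int64_t C) {
  int64_t Neg = (int64_t)(0 - (uint64_t)C); // wrap-safe negation
  return std::min(minSignedBits(C), minSignedBits(Neg));
}

// Profitability check: only replace C with its demanded-bits-shrunk value
// if the new constant is genuinely narrower than the old one.
static bool worthShrinking(int64_t C, int64_t Shrunk) {
  return immWidth(Shrunk) < immWidth(C);
}

int main() {
  // The case from the commit: the `and` only demands the low 61 bits
  // (the lshr by 3 zeroes the top 3 bits), so -1 would shrink to 2^61 - 1.
  int64_t Orig = -1;
  int64_t Shrunk = 0x1FFFFFFFFFFFFFFF; // 2305843009213693951
  std::printf("shrink -1 to 2^61-1? %s\n",
              worthShrinking(Orig, Shrunk) ? "yes" : "no"); // prints "no"
}
```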
| Name |
| --- |
| CMakeLists.txt |
| InstCombine.h |
| InstCombineAddSub.cpp |
| InstCombineAndOrXor.cpp |
| InstCombineCalls.cpp |
| InstCombineCasts.cpp |
| InstCombineCompares.cpp |
| InstCombineLoadStoreAlloca.cpp |
| InstCombineMulDivRem.cpp |
| InstCombinePHI.cpp |
| InstCombineSelect.cpp |
| InstCombineShifts.cpp |
| InstCombineSimplifyDemanded.cpp |
| InstCombineVectorOps.cpp |
| InstCombineWorklist.h |
| InstructionCombining.cpp |
| LLVMBuild.txt |
| Makefile |