Talk:Strength reduction

Integer reciprocal multiplication?

Removed "replacing integer division by a constant with reciprocal multiplication". Since when is dividing by 3 faster than converting to floating-point, multiplying by 0.33..., converting back, and worrying about rounding modes? It also isn't guaranteed to work in IEEE floating-point for e.g. 3*0.33...; I suspect it's guaranteed for 2*0.5, but I wouldn't be surprised if the power-of-2 case was handled specially. ⇌Elektron 23:56, 3 December 2008 (UTC)[reply]

There isn't any conversion to floating point: integer division by a constant is replaced by integer multiplication by a constant, taking advantage of the fact that machine integers are not genuine integers but instead form a finite ring. See Division by Invariant Integers using Multiplication. — Preceding unsigned comment added by 125.239.231.49 (talk) 12:24, 15 March 2012 (UTC)
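
A minimal C sketch of the transformation just described, using the standard multiplier for unsigned division by 3 (0xAAAAAAAB = ceil(2^33/3), as derived in the paper cited above; the name div3 is illustrative, not from the paper):

    #include <assert.h>
    #include <stdint.h>

    /* Unsigned division by the constant 3, rewritten as a multiply and a
     * shift: 0xAAAAAAAB is ceil(2^33 / 3), and the 33-bit right shift
     * discards the fractional part, so div3(x) == x / 3 for every
     * 32-bit unsigned x. No floating point is involved. */
    static uint32_t div3(uint32_t x)
    {
        return (uint32_t)(((uint64_t)x * 0xAAAAAAABu) >> 33);
    }

    int main(void)
    {
        /* Spot-check the identity over a range of inputs. */
        for (uint32_t x = 0; x < 1000000; ++x)
            assert(div3(x) == x / 3);
        return 0;
    }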

Overall comment

Back when an integer multiplication took a long time (e.g., 16 cycles of the multiply-step operation on an IBM ROMP circa 1985), the kind of single-operation strength reduction cited in the article (converting "x*2" into "x+x") made sense. With faster implementations of integer multiplication, it is no longer clear that this example makes sense as a speed improvement. (The compiler might still apply the transformation because of an energy saving, or to create a longer chain of additions that can be reorganized with the associative and commutative laws -- either through algebraic reassociation or through tree height reduction, which are related techniques.)
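
For concreteness, the single-operation rewrite mentioned above, as a sketch (function names are illustrative):

    int twice(int x)         { return 2 * x; }  /* multiply by 2 ...   */
    int twice_reduced(int x) { return x + x; }  /* ... becomes one add */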

Today, the primary reason for a compiler to perform strength reduction -- particularly the reduction of repetitive computations based on an induction variable -- is to reduce the overhead of address arithmetic inside some performance-critical loop. Careful application of strength reduction can convert a loop full of address expressions into a couple of register-based induction variables and a lot of loads in a register + immediate addressing mode. Done poorly, however, strength reduction can leave the compiler with a distinct induction variable for each address expression, which leads to a slew of increment operations and a slew of values that each need to live in a register. [Hence, the need for algebraic reassociation ...]
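
A hypothetical before/after sketch of the loop case described above (shown at the source level for readability; a compiler performs the rewrite on its intermediate representation, and the function names are mine):

    /* Before: every iteration computes the address a + 4*i from scratch. */
    long sum(const int *a, int n)
    {
        long s = 0;
        for (int i = 0; i < n; ++i)
            s += a[i];                /* address expression: a + 4*i */
        return s;
    }

    /* After strength reduction: the multiply in the address expression is
     * gone; one pointer induction variable advances by sizeof(int) per
     * iteration, so each access is a simple register-based load. */
    long sum_reduced(const int *a, int n)
    {
        long s = 0;
        for (const int *p = a, *end = a + n; p != end; ++p)
            s += *p;
        return s;
    }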

Where am I going? I think the article might be improved by a major rewrite that explains this "recursive" or "iterative" strength reduction in more detail and describes its broader impact on the code. In its current form, the article leads the reader to believe that the minor local win of replacing "x*2" with "x+x" is the primary benefit of this much-vaunted technique.

With a little more discussion of the harder form of strength reduction, I would add citations to the classic articles by Allen, Cocke, & Kennedy and by Dhamdhere et al. as reflective of the two major schools of thought. Our strength reduction algorithm [Cooper, Simpson, & Vick, ACM TOPLAS, Sept 2001] relies on SSA form to simplify the entire process, but it doesn't change the basic concepts from the earlier work by Allen, Cocke, and Kennedy.

Any reactions? Kdcooper (talk) 23:39, 4 January 2009 (UTC)

I agree with these comments. Glrx (talk) 03:31, 22 April 2010 (UTC)

Peephole Optimizations

I think this article confuses peephole optimization and instruction assignment with strength reduction. Doing small, special-case tricks such as replacing 4*x with a shift may help, but it doesn't strike me as "strength reduction". The code that needs to be optimized is the code that executes a lot, and that code usually occurs inside a loop. The classic compiler optimizations focused on cleaning up loops -- hoisting calculations, eliminating extra variables, unrolling, and reducing strength. Glrx (talk) 03:31, 22 April 2010 (UTC)
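
For reference, the local rewrite at issue is just this (a sketch; the names are illustrative):

    unsigned scale(unsigned x)
    {
        return 4 * x;   /* multiply by a power-of-two constant ... */
    }

    unsigned scale_peephole(unsigned x)
    {
        return x << 2;  /* ... rewritten locally as a shift */
    }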

I disagree; the optimizations discussed are examples of both peephole optimization and strength reduction. The distinction between optimal instruction assignment and peephole strength reduction is semantic and doesn't add to the clarity of the explanation.--Wcoole (talk) 22:23, 19 October 2015 (UTC)
Yes, the article describes both, but my complaint is that it confuses the two ideas.
Semantic distinctions are significant. Peephole operations are local and do not require significant data flow analysis. Replacing a push and a pop with nothing is "reducing strength" in some sense, but all optimizations are trying to make the computation simpler or faster. Strength reduction should not be a moniker for all optimizations.
Strength reduction is an operation that is usually tied to induction variables; it is not a matter of looking through a peephole. See, e.g., https://www.cs.utexas.edu/~pingali/CS380C/2013/lectures/strengthReduction.pdf and http://crpit.com/confpapers/CRPITV26Nguyen.pdf The usage is by no means clear-cut, though; here is a paper that treats the peephole variety as a weak form of SR: http://www.cs.rice.edu/~keith/EMBED/OSR.pdf Glrx (talk) 19:26, 20 October 2015 (UTC)
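
A minimal sketch of that induction-variable form, in the spirit of the lecture notes linked above (names are mine; again, a compiler would perform this on its intermediate form):

    /* Before: the loop body recomputes the multiplication 5 * i on
     * every iteration. */
    void fill(int *a, int n)
    {
        for (int i = 0; i < n; ++i)
            a[i] = 5 * i;
    }

    /* After: the multiply is reduced to a repeated add on a new
     * induction variable t, which tracks 5 * i across iterations.
     * Spotting this requires induction-variable analysis of the whole
     * loop -- more than a peephole's worth of context. */
    void fill_reduced(int *a, int n)
    {
        int t = 0;
        for (int i = 0; i < n; ++i) {
            a[i] = t;
            t += 5;
        }
    }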