This patch fixes some of the problems with costing in the scalar-to-vector pass.
In particular:
1) The pass uses optimize_insn_for_size_p, which is intended to be used by
expanders and splitters and requires the optimization pass to call
set_rtl_profile (bb) for the currently processed bb.
This is not done, so we get random stale info about the hotness of the insn.
2) Register allocator move costs are all relative to the integer reg-reg move,
which has a cost of 2, so they are (except for the size tables and i386)
the latency of the instruction multiplied by 2.
These costs have been duplicated and are now used in combination with
rtx costs, which are all based on COSTS_N_INSNS, which multiplies latency
by 4.
Some of the vectorizer costing contains COSTS_N_INSNS (move_cost) / 2
to compensate, but some new code does not. This patch adds the compensation.
Perhaps we should update the cost tables to use COSTS_N_INSNS everywhere,
but I think we want to fix the inconsistencies first. Also the tables would
get visually much longer, since we have many move costs and COSTS_N_INSNS
is a lot of characters.
3) The variable m, which decides how much to multiply the integer variant (to
account for the fact that with -m32 all 64-bit computations need 2 instructions),
is declared unsigned, which makes the signed computation of the instruction gain
be done in an unsigned type and breaks it, e.g. for division.
4) I added integer_to_sse costs, which are currently all duplicates of
sse_to_integer. AMD chips are asymmetric and moving in one direction is faster
than the other. I will change the costs incrementally once the vectorizer part
is fixed up, too.
There are two failures, gcc.target/i386/minmax-6.c and gcc.target/i386/minmax-7.c.
Both test stv on Haswell, which no longer happens since SSE->INT and INT->SSE moves
are now more expensive.
There is only one instruction to convert:
Computing gain for chain #1...
Instruction gain 8 for 11: {r110:SI=smax(r116:SI,0);clobber flags:CC;}
Instruction conversion gain: 8
Registers conversion cost: 8 <- this is integer_to_sse and sse_to_integer
Total gain: 0
The total gain used to be 4, since the patch doubles the conversion costs.
According to Agner Fog's tables the cost should be 1 cycle, which is correct
here.
The final code generated is:
vmovd %esi, %xmm0 * latency 1
cmpl %edx, %esi
je .L2
vpxor %xmm1, %xmm1, %xmm1 * latency 1
vpmaxsd %xmm1, %xmm0, %xmm0 * latency 1
vmovd %xmm0, %eax * latency 1
imull %edx, %eax
cltq
movzwl (%rdi,%rax,2), %eax
ret
cmpl %edx, %esi
je .L2
xorl %eax, %eax * latency 1
testl %esi, %esi * latency 1
cmovs %eax, %esi * latency 2
imull %edx, %esi
movslq %esi, %rsi
movzwl (%rdi,%rsi,2), %eax
ret
The instructions annotated with latency info are those that really differ.
So the unconverted code has a sum of latencies of 4 and a real latency of 3.
The converted code has a sum of latencies of 4 and a real latency of 3
(vmovd+vpmaxsd+vmovd).
So I do not quite see why it should be a win.
There is also a bug in costing MIN/MAX
case ABS:
case SMAX:
case SMIN:
case UMAX:
case UMIN:
/* We do not have any conditional move cost, estimate it as a
reg-reg move. Comparisons are costed as adds. */
igain += m * (COSTS_N_INSNS (2) + ix86_cost->add);
/* Integer SSE ops are all costed the same. */
igain -= ix86_cost->sse_op;
break;
Now COSTS_N_INSNS (2) is not quite right, since a reg-reg move should be 1, or
perhaps 0. For Haswell, cmov really is 2 cycles, but I guess we want to have that
in the cost vectors like all the other instructions.
I am not sure this is really a win in this case (the other minmax testcases seem
to make sense). I have xfailed it for now and will check whether it affects SPEC
on the LNT testers.
I will proceed with similar fixes on the vectorizer cost side. Sadly those
introduce quite some differences in the testsuite (partly triggered by other
costing problems, such as the one with scatter/gather).
gcc/ChangeLog:
* config/i386/i386-features.cc
(general_scalar_chain::vector_const_cost): Add BB parameter; handle
size costs; use COSTS_N_INSNS to compute move costs.
(general_scalar_chain::compute_convert_gain): Use optimize_bb_for_size_p
instead of optimize_insn_for_size_p; use COSTS_N_INSNS to compute move costs;
update calls of general_scalar_chain::vector_const_cost; use
ix86_cost->integer_to_sse.
(timode_immed_const_gain): Add bb parameter; use
optimize_bb_for_size_p.
(timode_scalar_chain::compute_convert_gain): Use optimize_bb_for_size_p.
* config/i386/i386-features.h (class general_scalar_chain): Update
prototype of vector_const_cost.
* config/i386/i386.h (struct processor_costs): Add integer_to_sse.
* config/i386/x86-tune-costs.h (struct processor_costs): Copy
sse_to_integer to integer_to_sse everywhere.
gcc/testsuite/ChangeLog:
* gcc.target/i386/minmax-6.c: xfail test that pmax is used.
* gcc.target/i386/minmax-7.c: xfail test that pmin is used.