| <!DOCTYPE html> |
| |
| <html> |
| <head> |
| <meta charset="UTF-8"> |
| <link href="style.css" type="text/css" rel="stylesheet"> |
<title>PHADDW/PHADDD — Packed Horizontal Add</title></head>
| <body> |
| <h1>PHADDW/PHADDD — Packed Horizontal Add</h1> |
| <table> |
| <tr> |
| <th>Opcode/Instruction</th> |
| <th>Op/En</th> |
| <th>64/32 bit Mode Support</th> |
| <th>CPUID Feature Flag</th> |
| <th>Description</th></tr> |
| <tr> |
| <td> |
| <p>0F 38 01 /r<sup>1</sup></p> |
| <p>PHADDW <em>mm1, mm2/m64</em></p></td> |
| <td>RM</td> |
| <td>V/V</td> |
| <td>SSSE3</td> |
| <td>Add 16-bit integers horizontally, pack to <em>mm1</em>.</td></tr> |
| <tr> |
| <td> |
| <p>66 0F 38 01 /r</p> |
| <p>PHADDW <em>xmm1, xmm2/m128</em></p></td> |
| <td>RM</td> |
| <td>V/V</td> |
| <td>SSSE3</td> |
| <td>Add 16-bit integers horizontally, pack to <em>xmm1</em>.</td></tr> |
| <tr> |
| <td> |
| <p>0F 38 02 /r</p> |
| <p>PHADDD <em>mm1, mm2/m64</em></p></td> |
| <td>RM</td> |
| <td>V/V</td> |
| <td>SSSE3</td> |
| <td>Add 32-bit integers horizontally, pack to <em>mm1</em>.</td></tr> |
| <tr> |
| <td> |
| <p>66 0F 38 02 /r</p> |
| <p>PHADDD <em>xmm1, xmm2/m128</em></p></td> |
| <td>RM</td> |
| <td>V/V</td> |
| <td>SSSE3</td> |
| <td>Add 32-bit integers horizontally, pack to <em>xmm1</em>.</td></tr> |
| <tr> |
| <td> |
| <p>VEX.NDS.128.66.0F38.WIG 01 /r</p> |
| <p>VPHADDW <em>xmm1, xmm2, xmm3/m128</em></p></td> |
| <td>RVM</td> |
| <td>V/V</td> |
| <td>AVX</td> |
| <td>Add 16-bit integers horizontally, pack to <em>xmm1</em>.</td></tr> |
| <tr> |
| <td> |
| <p>VEX.NDS.128.66.0F38.WIG 02 /r</p> |
| <p>VPHADDD <em>xmm1, xmm2, xmm3/m128</em></p></td> |
| <td>RVM</td> |
| <td>V/V</td> |
| <td>AVX</td> |
| <td>Add 32-bit integers horizontally, pack to <em>xmm1</em>.</td></tr> |
| <tr> |
| <td> |
| <p>VEX.NDS.256.66.0F38.WIG 01 /r</p> |
| <p>VPHADDW <em>ymm1, ymm2, ymm3/m256</em></p></td> |
| <td>RVM</td> |
| <td>V/V</td> |
| <td>AVX2</td> |
| <td>Add 16-bit signed integers horizontally, pack to <em>ymm1</em>.</td></tr> |
| <tr> |
| <td> |
| <p>VEX.NDS.256.66.0F38.WIG 02 /r</p> |
| <p>VPHADDD <em>ymm1, ymm2, ymm3/m256</em></p></td> |
| <td>RVM</td> |
| <td>V/V</td> |
| <td>AVX2</td> |
| <td>Add 32-bit signed integers horizontally, pack to <em>ymm1</em>.</td></tr></table> |
| <p>NOTES:</p> |
| <p>1. See note in Section 2.4, “Instruction Exception Specification” in the <em>Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A</em> and Section 22.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers” in the <em>Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A</em>.</p> |
| <h3>Instruction Operand Encoding</h3> |
| <table> |
| <tr> |
<th>Op/En</th>
<th>Operand 1</th>
<th>Operand 2</th>
<th>Operand 3</th>
<th>Operand 4</th></tr>
| <tr> |
| <td>RM</td> |
| <td>ModRM:reg (r, w)</td> |
| <td>ModRM:r/m (r)</td> |
| <td>NA</td> |
| <td>NA</td></tr> |
| <tr> |
| <td>RVM</td> |
| <td>ModRM:reg (w)</td> |
| <td>VEX.vvvv (r)</td> |
| <td>ModRM:r/m (r)</td> |
| <td>NA</td></tr></table> |
| <h2>Description</h2> |
| <p>(V)PHADDW adds two adjacent 16-bit signed integers horizontally from the source and destination operands and packs the 16-bit signed results to the destination operand (first operand). (V)PHADDD adds two adjacent 32-bit signed integers horizontally from the source and destination operands and packs the 32-bit signed results to the destination operand (first operand). When the source operand is a 128-bit memory operand, the operand must be aligned on a 16-byte boundary or a general-protection exception (#GP) will be generated.</p> |
<p>Note that these instructions can operate on either unsigned or signed (two’s complement notation) integers; however, they do not set bits in the EFLAGS register to indicate overflow or carry. To prevent undetected overflow conditions, software must control the ranges of the values operated on.</p>
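<p>The following sketch (not part of this manual) illustrates the 128-bit behavior through the SSSE3 intrinsic _mm_hadd_epi16: the low four words of the result are pairwise sums from the first operand, the high four words are pairwise sums from the second operand, and an overflowing sum silently wraps modulo 2<sup>16</sup>.</p>
<pre>/* Illustrative example only: compile with SSSE3 enabled (e.g. gcc -mssse3). */
#include &lt;stdio.h&gt;
#include &lt;tmmintrin.h&gt;          /* SSSE3: _mm_hadd_epi16 */

int main(void)
{
    /* _mm_set_epi16 lists elements from most significant to least significant. */
    __m128i a = _mm_set_epi16(8, 7, 6, 5, 4, 3, 2, 1);       /* first operand  */
    __m128i b = _mm_set_epi16(0, 0, 0, 0, 0, 0, 32767, 1);   /* second operand */

    /* Result, low word first: {1+2, 3+4, 5+6, 7+8, 1+32767, 0+0, 0+0, 0+0};
       32768 does not fit in a signed word and wraps to -32768.              */
    __m128i r = _mm_hadd_epi16(a, b);

    short out[8];
    _mm_storeu_si128((__m128i *)out, r);
    for (int i = 0; i &lt; 8; i++)
        printf("%d ", out[i]);           /* prints: 3 7 11 15 -32768 0 0 0 */
    printf("\n");
    return 0;
}</pre>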
| <p>Legacy SSE instructions: Both operands can be MMX registers. The second source operand can be an MMX register or a 64-bit memory location.</p> |
| <p>128-bit Legacy SSE version: The first source and destination operands are XMM registers. The second source operand can be an XMM register or a 128-bit memory location. Bits (VLMAX-1:128) of the corresponding YMM destination register remain unchanged.</p> |
| <p>In 64-bit mode, use the REX prefix to access additional registers.</p> |
| <p>VEX.128 encoded version: The first source and destination operands are XMM registers. The second source operand can be an XMM register or a 128-bit memory location. Bits (VLMAX-1:128) of the corresponding YMM register are zeroed.</p> |
<p>VEX.256 encoded version: The results of horizontally adding two adjacent data elements in the low 16 bytes of the first and second source operands are packed into the low 16 bytes of the destination operand. The results of horizontally adding two adjacent data elements in the high 16 bytes of the first and second source operands are packed into the high 16 bytes of the destination operand. The first source and destination operands are YMM registers. The second source operand can be a YMM register or a 256-bit memory location.</p>
<p>Note: VEX.L must be 0; otherwise the instruction will #UD.</p>
<p>[Figure: the two 256-bit source operands (elements Y7..Y0 and X7..X0) and the 256-bit destination, with adjacent element pairs summed and packed within each 128-bit lane.]</p>
| <h3>Figure 4-6. 256-bit VPHADDD Instruction Operation</h3> |
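<p>A minimal sketch (not part of this manual) of the 256-bit behavior shown in Figure 4-6, using the AVX2 intrinsic _mm256_hadd_epi32: each 128-bit lane is processed independently, so within each lane the two low doublewords come from the first source operand and the two high doublewords come from the second source operand. It assumes an AVX2-capable processor and compiler flags (e.g. -mavx2).</p>
<pre>#include &lt;stdio.h&gt;
#include &lt;immintrin.h&gt;          /* AVX2: _mm256_hadd_epi32 */

int main(void)
{
    /* _mm256_set_epi32 lists elements from most significant to least significant. */
    __m256i src1 = _mm256_set_epi32(70, 60, 50, 40, 30, 20, 10, 0);
    __m256i src2 = _mm256_set_epi32(700, 600, 500, 400, 300, 200, 100, 0);

    /* Low lane:  pairwise sums of src1's low 16 bytes, then src2's low 16 bytes;
       high lane: pairwise sums of src1's high 16 bytes, then src2's high 16 bytes. */
    __m256i dst = _mm256_hadd_epi32(src1, src2);

    int out[8];
    _mm256_storeu_si256((__m256i *)out, dst);
    for (int i = 0; i &lt; 8; i++)
        printf("%d ", out[i]);   /* prints: 10 50 100 500 90 130 900 1300 */
    printf("\n");
    return 0;
}</pre>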
| <h2>Operation</h2> |
| <p><strong>PHADDW (with 64-bit operands)</strong></p> |
| <pre> mm1[15-0] = mm1[31-16] + mm1[15-0]; |
| mm1[31-16] = mm1[63-48] + mm1[47-32]; |
| mm1[47-32] = mm2/m64[31-16] + mm2/m64[15-0]; |
| mm1[63-48] = mm2/m64[63-48] + mm2/m64[47-32];</pre> |
| <p><strong>PHADDW (with 128-bit operands)</strong></p> |
| <pre> xmm1[15-0] = xmm1[31-16] + xmm1[15-0]; |
| xmm1[31-16] = xmm1[63-48] + xmm1[47-32]; |
| xmm1[47-32] = xmm1[95-80] + xmm1[79-64]; |
| xmm1[63-48] = xmm1[127-112] + xmm1[111-96]; |
| xmm1[79-64] = xmm2/m128[31-16] + xmm2/m128[15-0]; |
| xmm1[95-80] = xmm2/m128[63-48] + xmm2/m128[47-32]; |
| xmm1[111-96] = xmm2/m128[95-80] + xmm2/m128[79-64]; |
| xmm1[127-112] = xmm2/m128[127-112] + xmm2/m128[111-96];</pre> |
| <p><strong>VPHADDW (VEX.128 encoded version)</strong></p> |
| <pre>DEST[15:0] ← SRC1[31:16] + SRC1[15:0] |
| DEST[31:16] ← SRC1[63:48] + SRC1[47:32] |
| DEST[47:32] ← SRC1[95:80] + SRC1[79:64] |
| DEST[63:48] ← SRC1[127:112] + SRC1[111:96] |
| DEST[79:64] ← SRC2[31:16] + SRC2[15:0] |
| DEST[95:80] ← SRC2[63:48] + SRC2[47:32] |
| DEST[111:96] ← SRC2[95:80] + SRC2[79:64] |
| DEST[127:112] ← SRC2[127:112] + SRC2[111:96] |
| DEST[VLMAX-1:128] ← 0</pre> |
| <p><strong>VPHADDW (VEX.256 encoded version)</strong></p> |
| <pre>DEST[15:0] ← SRC1[31:16] + SRC1[15:0] |
| DEST[31:16] ← SRC1[63:48] + SRC1[47:32] |
| DEST[47:32] ← SRC1[95:80] + SRC1[79:64] |
| DEST[63:48] ← SRC1[127:112] + SRC1[111:96] |
| DEST[79:64] ← SRC2[31:16] + SRC2[15:0] |
| DEST[95:80] ← SRC2[63:48] + SRC2[47:32] |
| DEST[111:96] ← SRC2[95:80] + SRC2[79:64] |
| DEST[127:112] ← SRC2[127:112] + SRC2[111:96] |
| DEST[143:128] ← SRC1[159:144] + SRC1[143:128] |
| DEST[159:144] ← SRC1[191:176] + SRC1[175:160] |
| DEST[175:160] ← SRC1[223:208] + SRC1[207:192] |
| DEST[191:176] ← SRC1[255:240] + SRC1[239:224] |
DEST[207:192] ← SRC2[159:144] + SRC2[143:128]
DEST[223:208] ← SRC2[191:176] + SRC2[175:160]
DEST[239:224] ← SRC2[223:208] + SRC2[207:192]
DEST[255:240] ← SRC2[255:240] + SRC2[239:224]</pre>
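<p>The lane-by-lane structure of the pseudocode above can be restated as a short scalar reference model. The following C sketch is an illustration, not part of the architectural definition; it treats each operand as an array of sixteen 16-bit words (index 0 = least significant) and lets word arithmetic wrap modulo 2<sup>16</sup>.</p>
<pre>#include &lt;stdint.h&gt;
#include &lt;string.h&gt;

/* Scalar reference model of VPHADDW (VEX.256): two independent 128-bit lanes;
   in each lane the low four result words are adjacent-pair sums from src1 and
   the high four are adjacent-pair sums from src2. */
void vphaddw_256(int16_t dest[16], const int16_t src1[16], const int16_t src2[16])
{
    int16_t tmp[16];                         /* allows dest to alias a source */
    for (int lane = 0; lane &lt; 2; lane++) {
        int base = lane * 8;                 /* first word index of the lane  */
        for (int i = 0; i &lt; 4; i++) {
            tmp[base + i]     = (int16_t)(src1[base + 2*i + 1] + src1[base + 2*i]);
            tmp[base + 4 + i] = (int16_t)(src2[base + 2*i + 1] + src2[base + 2*i]);
        }
    }
    memcpy(dest, tmp, sizeof tmp);
}</pre>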
| <p><strong>PHADDD (with 64-bit operands)</strong></p> |
| <pre> mm1[31-0] = mm1[63-32] + mm1[31-0]; |
| mm1[63-32] = mm2/m64[63-32] + mm2/m64[31-0];</pre> |
| <p><strong>PHADDD (with 128-bit operands)</strong></p> |
| <pre> xmm1[31-0] = xmm1[63-32] + xmm1[31-0]; |
| xmm1[63-32] = xmm1[127-96] + xmm1[95-64]; |
| xmm1[95-64] = xmm2/m128[63-32] + xmm2/m128[31-0]; |
| xmm1[127-96] = xmm2/m128[127-96] + xmm2/m128[95-64];</pre> |
| <p><strong>VPHADDD (VEX.128 encoded version)</strong></p> |
<pre>DEST[31:0] ← SRC1[63:32] + SRC1[31:0]
DEST[63:32] ← SRC1[127:96] + SRC1[95:64]
DEST[95:64] ← SRC2[63:32] + SRC2[31:0]
DEST[127:96] ← SRC2[127:96] + SRC2[95:64]
DEST[VLMAX-1:128] ← 0</pre>
| <p><strong>VPHADDD (VEX.256 encoded version)</strong></p> |
<pre>DEST[31:0] ← SRC1[63:32] + SRC1[31:0]
DEST[63:32] ← SRC1[127:96] + SRC1[95:64]
DEST[95:64] ← SRC2[63:32] + SRC2[31:0]
DEST[127:96] ← SRC2[127:96] + SRC2[95:64]
DEST[159:128] ← SRC1[191:160] + SRC1[159:128]
DEST[191:160] ← SRC1[255:224] + SRC1[223:192]
DEST[223:192] ← SRC2[191:160] + SRC2[159:128]
DEST[255:224] ← SRC2[255:224] + SRC2[223:192]</pre>
| <h2>Intel C/C++ Compiler Intrinsic Equivalents</h2> |
| <p>PHADDW:</p> |
| <p>__m64 _mm_hadd_pi16 (__m64 a, __m64 b)</p> |
| <p>PHADDD:</p> |
| <p>__m64 _mm_hadd_pi32 (__m64 a, __m64 b)</p> |
| <p>(V)PHADDW:</p> |
| <p>__m128i _mm_hadd_epi16 (__m128i a, __m128i b)</p> |
| <p>(V)PHADDD:</p> |
| <p>__m128i _mm_hadd_epi32 (__m128i a, __m128i b)</p> |
| <p>VPHADDW:</p> |
| <p>__m256i _mm256_hadd_epi16 (__m256i a, __m256i b)</p> |
| <p>VPHADDD:</p> |
| <p>__m256i _mm256_hadd_epi32 (__m256i a, __m256i b)</p> |
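<p>As an illustration (not from this manual), a common use of these intrinsics is a full horizontal reduction: applying _mm_hadd_epi32 twice collapses four packed doublewords into a single sum. The helper name hsum_epi32 below is an arbitrary choice for this sketch; it assumes SSSE3 support.</p>
<pre>#include &lt;stdio.h&gt;
#include &lt;tmmintrin.h&gt;          /* SSSE3: _mm_hadd_epi32 */

static int hsum_epi32(__m128i v)
{
    v = _mm_hadd_epi32(v, v);    /* {a+b, c+d, a+b, c+d}        */
    v = _mm_hadd_epi32(v, v);    /* {a+b+c+d, ...} in element 0 */
    return _mm_cvtsi128_si32(v); /* extract the low doubleword  */
}

int main(void)
{
    __m128i v = _mm_set_epi32(4, 3, 2, 1);
    printf("%d\n", hsum_epi32(v));   /* prints 10 */
    return 0;
}</pre>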
| <h2>SIMD Floating-Point Exceptions</h2> |
| <p>None.</p> |
| <h2>Other Exceptions</h2> |
<p>See Exceptions Type 4; additionally:</p>
| <table class="exception-table"> |
| <tr> |
| <td>#UD</td> |
| <td>If VEX.L = 1.</td></tr></table></body></html> |