<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<link href="style.css" type="text/css" rel="stylesheet">
<title>PHSUBW/PHSUBD — Packed Horizontal Subtract</title></head>
<body>
<h1>PHSUBW/PHSUBD — Packed Horizontal Subtract</h1>
<table>
<tr>
<th>Opcode/Instruction</th>
<th>Op/En</th>
<th>64/32 bit Mode Support</th>
<th>CPUID Feature Flag</th>
<th>Description</th></tr>
<tr>
<td>
<p>0F 38 05 /r<sup>1</sup></p>
<p>PHSUBW <em>mm1, mm2/m64</em></p></td>
<td>RM</td>
<td>V/V</td>
<td>SSSE3</td>
<td>Subtract 16-bit signed integers horizontally, pack to <em>mm1</em>.</td></tr>
<tr>
<td>
<p>66 0F 38 05 /r</p>
<p>PHSUBW <em>xmm1, xmm2/m128</em></p></td>
<td>RM</td>
<td>V/V</td>
<td>SSSE3</td>
<td>Subtract 16-bit signed integers horizontally, pack to <em>xmm1</em>.</td></tr>
<tr>
<td>
<p>0F 38 06 /r</p>
<p>PHSUBD <em>mm1, mm2/m64</em></p></td>
<td>RM</td>
<td>V/V</td>
<td>SSSE3</td>
<td>Subtract 32-bit signed integers horizontally, pack to <em>mm1</em>.</td></tr>
<tr>
<td>
<p>66 0F 38 06 /r</p>
<p>PHSUBD <em>xmm1, xmm2/m128</em></p></td>
<td>RM</td>
<td>V/V</td>
<td>SSSE3</td>
<td>Subtract 32-bit signed integers horizontally, pack to <em>xmm1</em>.</td></tr>
<tr>
<td>
<p>VEX.NDS.128.66.0F38.WIG 05 /r</p>
<p>VPHSUBW <em>xmm1, xmm2, xmm3/m128</em></p></td>
<td>RVM</td>
<td>V/V</td>
<td>AVX</td>
<td>Subtract 16-bit signed integers horizontally, pack to <em>xmm1</em>.</td></tr>
<tr>
<td>
<p>VEX.NDS.128.66.0F38.WIG 06 /r</p>
<p>VPHSUBD <em>xmm1, xmm2, xmm3/m128</em></p></td>
<td>RVM</td>
<td>V/V</td>
<td>AVX</td>
<td>Subtract 32-bit signed integers horizontally, pack to <em>xmm1</em>.</td></tr>
<tr>
<td>
<p>VEX.NDS.256.66.0F38.WIG 05 /r</p>
<p>VPHSUBW <em>ymm1, ymm2, ymm3/m256</em></p></td>
<td>RVM</td>
<td>V/V</td>
<td>AVX2</td>
<td>Subtract 16-bit signed integers horizontally, pack to <em>ymm1</em>.</td></tr>
<tr>
<td>
<p>VEX.NDS.256.66.0F38.WIG 06 /r</p>
<p>VPHSUBD <em>ymm1, ymm2, ymm3/m256</em></p></td>
<td>RVM</td>
<td>V/V</td>
<td>AVX2</td>
<td>Subtract 32-bit signed integers horizontally, pack to <em>ymm1</em>.</td></tr></table>
<p>NOTES:</p>
<p>1. See note in Section 2.4, “Instruction Exception Specification” in the <em>Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A</em> and Section 22.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers” in the <em>Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A</em>.</p>
<h3>Instruction Operand Encoding</h3>
<table>
<tr>
<td>Op/En</td>
<td>Operand 1</td>
<td>Operand 2</td>
<td>Operand 3</td>
<td>Operand 4</td></tr>
<tr>
<td>RM</td>
<td>ModRM:reg (r, w)</td>
<td>ModRM:r/m (r)</td>
<td>NA</td>
<td>NA</td></tr>
<tr>
<td>RVM</td>
<td>ModRM:reg (r, w)</td>
<td>VEX.vvvv (r)</td>
<td>ModRM:r/m (r)</td>
<td>NA</td></tr></table>
<h2>Description</h2>
<p>(V)PHSUBW performs horizontal subtraction on each adjacent pair of 16-bit signed integers by subtracting the most significant word from the least significant word of each pair in the source and destination operands, and packs the signed 16-bit results to the destination operand (first operand). (V)PHSUBD performs horizontal subtraction on each adjacent pair of 32-bit signed integers by subtracting the most significant doubleword from the least significant doubleword of each pair, and packs the signed 32-bit results to the destination operand. When the source operand is a 128-bit memory operand, the operand must be aligned on a 16-byte boundary or a general-protection exception (#GP) will be generated.</p>
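<p>As an illustration (not part of the manual's normative text), the following sketch uses the SSSE3 intrinsic _mm_hsub_epi16 to show the result ordering of the 128-bit PHSUBW form; it assumes a C compiler with SSSE3 enabled (e.g., -mssse3):</p>
<pre>/* Hypothetical demo: result ordering of PHSUBW via _mm_hsub_epi16. */
#include &lt;immintrin.h&gt;
#include &lt;stdio.h&gt;

int main(void)
{
    /* _mm_set_epi16 lists elements from most to least significant,
       so element 0 of a is 10 and element 7 is 80. */
    __m128i a = _mm_set_epi16(80, 70, 60, 50, 40, 30, 20, 10);
    __m128i b = _mm_set_epi16( 8,  7,  6,  5,  4,  3,  2,  1);
    __m128i r = _mm_hsub_epi16(a, b);          /* PHSUBW xmm, xmm */

    short out[8];
    _mm_storeu_si128((__m128i *)out, r);
    /* Low four words come from a, high four from b:
       10-20  30-40  50-60  70-80  1-2  3-4  5-6  7-8 */
    for (int i = 0; i != 8; ++i)
        printf("%d ", out[i]);                 /* -10 -10 -10 -10 -1 -1 -1 -1 */
    putchar('\n');
    return 0;
}</pre>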
<p>Legacy SSE version: Both operands can be MMX registers. The second source operand can be an MMX register or a 64-bit memory location.</p>
<p>128-bit Legacy SSE version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (VLMAX-1:128) of the corresponding YMM destination register remain unchanged.</p>
<p>In 64-bit mode, use the REX prefix to access additional registers.</p>
<p>VEX.128 encoded version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (VLMAX-1:128) of the destination YMM register are zeroed.</p>
<p>VEX.256 encoded version: The first source and destination operands are YMM registers. The second source operand can be a YMM register or a 256-bit memory location.</p>
<p>Note: For the VEX.128 encoded versions, VEX.L must be 0; on a processor that does not support AVX2, a VEX.L = 1 encoding of these opcodes will #UD.</p>
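<p>One practical consequence, shown in the sketch below (again illustrative, assuming an AVX2 target such as -mavx2): the VEX.256 forms operate on two independent 128-bit lanes, so results from the two sources interleave per lane rather than across the whole register, exactly as the Operation section spells out.</p>
<pre>/* Hypothetical demo: in-lane behavior of VPHSUBD via _mm256_hsub_epi32. */
#include &lt;immintrin.h&gt;
#include &lt;stdio.h&gt;

int main(void)
{
    __m256i a = _mm256_set_epi32(8, 7, 6, 5, 4, 3, 2, 1);
    __m256i b = _mm256_set_epi32(80, 70, 60, 50, 40, 30, 20, 10);
    __m256i r = _mm256_hsub_epi32(a, b);   /* VPHSUBD ymm, ymm, ymm */

    int out[8];
    _mm256_storeu_si256((__m256i *)out, r);
    /* Per 128-bit lane, two results from a then two from b:
       1-2  3-4  10-20  30-40 | 5-6  7-8  50-60  70-80 */
    for (int i = 0; i != 8; ++i)
        printf("%d ", out[i]);             /* -1 -1 -10 -10 -1 -1 -10 -10 */
    putchar('\n');
    return 0;
}</pre>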
<h2>Operation</h2>
<p><strong>PHSUBW (with 64-bit operands)</strong></p>
<pre> mm1[15-0] = mm1[15-0] - mm1[31-16];
mm1[31-16] = mm1[47-32] - mm1[63-48];
mm1[47-32] = mm2/m64[15-0] - mm2/m64[31-16];
mm1[63-48] = mm2/m64[47-32] - mm2/m64[63-48];</pre>
<p><strong>PHSUBW (with 128-bit operands)</strong></p>
<pre> xmm1[15-0] = xmm1[15-0] - xmm1[31-16];
xmm1[31-16] = xmm1[47-32] - xmm1[63-48];
xmm1[47-32] = xmm1[79-64] - xmm1[95-80];
xmm1[63-48] = xmm1[111-96] - xmm1[127-112];
xmm1[79-64] = xmm2/m128[15-0] - xmm2/m128[31-16];
xmm1[95-80] = xmm2/m128[47-32] - xmm2/m128[63-48];
xmm1[111-96] = xmm2/m128[79-64] - xmm2/m128[95-80];
xmm1[127-112] = xmm2/m128[111-96] - xmm2/m128[127-112];</pre>
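<p>A plain-C reference model of the 128-bit pseudocode above may make the data flow easier to follow; this is an illustrative sketch (phsubw128 is a made-up name), not the architectural definition:</p>
<pre>#include &lt;stdint.h&gt;

/* Mirrors PHSUBW (128-bit). dst may alias src1, as the instruction's
   destination is also its first source, so results go through tmp. */
static void phsubw128(int16_t dst[8], const int16_t src1[8],
                      const int16_t src2[8])
{
    int16_t tmp[8];
    for (int i = 0; i != 4; ++i) {
        /* Wraparound (modulo 2^16) subtraction; no saturation. */
        tmp[i]     = (int16_t)(src1[2 * i] - src1[2 * i + 1]);
        tmp[i + 4] = (int16_t)(src2[2 * i] - src2[2 * i + 1]);
    }
    for (int i = 0; i != 8; ++i)
        dst[i] = tmp[i];
}</pre>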
<p><strong>VPHSUBW (VEX.128 encoded version)</strong></p>
<pre>DEST[15:0] ← SRC1[15:0] - SRC1[31:16]
DEST[31:16] ← SRC1[47:32] - SRC1[63:48]
DEST[47:32] ← SRC1[79:64] - SRC1[95:80]
DEST[63:48] ← SRC1[111:96] - SRC1[127:112]
DEST[79:64] ← SRC2[15:0] - SRC2[31:16]
DEST[95:80] ← SRC2[47:32] - SRC2[63:48]
DEST[111:96] ← SRC2[79:64] - SRC2[95:80]
DEST[127:112] ← SRC2[111:96] - SRC2[127:112]
DEST[VLMAX-1:128] ← 0</pre>
<p><strong>VPHSUBW (VEX.256 encoded version)</strong></p>
<pre>DEST[15:0] ← SRC1[15:0] - SRC1[31:16]
DEST[31:16] ← SRC1[47:32] - SRC1[63:48]
DEST[47:32] ← SRC1[79:64] - SRC1[95:80]
DEST[63:48] ← SRC1[111:96] - SRC1[127:112]
DEST[79:64] ← SRC2[15:0] - SRC2[31:16]
DEST[95:80] ← SRC2[47:32] - SRC2[63:48]
DEST[111:96] ← SRC2[79:64] - SRC2[95:80]
DEST[127:112] ← SRC2[111:96] - SRC2[127:112]
DEST[143:128] ← SRC1[143:128] - SRC1[159:144]
DEST[159:144] ← SRC1[175:160] - SRC1[191:176]
DEST[175:160] ← SRC1[207:192] - SRC1[223:208]
DEST[191:176] ← SRC1[239:224] - SRC1[255:240]
DEST[207:192] ← SRC2[143:128] - SRC2[159:144]
DEST[223:208] ← SRC2[175:160] - SRC2[191:176]
DEST[239:224] ← SRC2[207:192] - SRC2[223:208]
DEST[255:240] ← SRC2[239:224] - SRC2[255:240]</pre>
<p><strong>PHSUBD (with 64-bit operands)</strong></p>
<pre> mm1[31-0] = mm1[31-0] - mm1[63-32];
mm1[63-32] = mm2/m64[31-0] - mm2/m64[63-32];</pre>
<p><strong>PHSUBD (with 128-bit operands)</strong></p>
<pre> xmm1[31-0] = xmm1[31-0] - xmm1[63-32];
xmm1[63-32] = xmm1[95-64] - xmm1[127-96];
xmm1[95-64] = xmm2/m128[31-0] - xmm2/m128[63-32];
xmm1[127-96] = xmm2/m128[95-64] - xmm2/m128[127-96];</pre>
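<p>The analogous sketch for the 128-bit PHSUBD form (hypothetical names again, with the subtraction done in unsigned arithmetic to match the hardware's modulo-2<sup>32</sup> wraparound):</p>
<pre>#include &lt;stdint.h&gt;

/* Two's-complement subtraction modulo 2^32, like the hardware. */
static int32_t sub32(int32_t x, int32_t y)
{
    return (int32_t)((uint32_t)x - (uint32_t)y);
}

static void phsubd128(int32_t dst[4], const int32_t src1[4],
                      const int32_t src2[4])
{
    int32_t tmp[4];
    tmp[0] = sub32(src1[0], src1[1]);   /* low half from SRC1  */
    tmp[1] = sub32(src1[2], src1[3]);
    tmp[2] = sub32(src2[0], src2[1]);   /* high half from SRC2 */
    tmp[3] = sub32(src2[2], src2[3]);
    for (int i = 0; i != 4; ++i)
        dst[i] = tmp[i];
}</pre>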
<p><strong>VPHSUBD (VEX.128 encoded version)</strong></p>
<pre>DEST[31:0] ← SRC1[31:0] - SRC1[63:32]
DEST[63:32] ← SRC1[95:64] - SRC1[127:96]
DEST[95:64] ← SRC2[31:0] - SRC2[63:32]
DEST[127:96] ← SRC2[95:64] - SRC2[127:96]
DEST[VLMAX-1:128] ← 0</pre>
<p><strong>VPHSUBD (VEX.256 encoded version)</strong></p>
<pre>DEST[31:0] ← SRC1[31:0] - SRC1[63:32]
DEST[63:32] ← SRC1[95:64] - SRC1[127:96]
DEST[95:64] ← SRC2[31:0] - SRC2[63:32]
DEST[127:96] ← SRC2[95:64] - SRC2[127:96]
DEST[159:128] ← SRC1[159:128] - SRC1[191:160]
DEST[191:160] ← SRC1[223:192] - SRC1[255:224]
DEST[223:192] ← SRC2[159:128] - SRC2[191:160]
DEST[255:224] ← SRC2[223:192] - SRC2[255:224]</pre>
<h2>Intel C/C++ Compiler Intrinsic Equivalents</h2>
<p>PHSUBW:</p>
<p> __m64 _mm_hsub_pi16 (__m64 a, __m64 b)</p>
<p>PHSUBD:</p>
<p> __m64 _mm_hsub_pi32 (__m64 a, __m64 b)</p>
<p>(V)PHSUBW:</p>
<p> __m128i _mm_hsub_epi16 (__m128i a, __m128i b)</p>
<p>(V)PHSUBD:</p>
<p> __m128i _mm_hsub_epi32 (__m128i a, __m128i b)</p>
<p>VPHSUBW:</p>
<p> __m256i _mm256_hsub_epi16 (__m256i a, __m256i b)</p>
<p>VPHSUBD:</p>
<p> __m256i _mm256_hsub_epi32 (__m256i a, __m256i b)</p>
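<p>All of these intrinsics are declared in immintrin.h (the SSSE3 subset is also available via tmmintrin.h). A typical build line for the demo sketches above, assuming gcc or clang and hypothetical file names, might be:</p>
<pre>cc -O2 -mssse3 phsubw_demo.c -o phsubw_demo
cc -O2 -mavx2  vphsubd_demo.c -o vphsubd_demo</pre>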
<h2>SIMD Floating-Point Exceptions</h2>
<p>None.</p>
<h2>Other Exceptions</h2>
<p>See Exceptions Type 4; additionally:</p>
<table class="exception-table">
<tr>
<td>#UD</td>
<td>If VEX.L = 1 on a processor that does not support AVX2.</td></tr></table></body></html>