
VPGATHERQD/VPGATHERQQ—Gather Packed Dword, Packed Qword with Signed Qword Indices

Opcode/Instruction Op/En 64/32 bit Mode Support CPUID Feature Flag Description

EVEX.128.66.0F38.W0 91 /vsib VPGATHERQD xmm1 {k1}, vm64x
T1S V/V AVX512VL AVX512F
Using signed qword indices, gather dword values from memory using writemask k1 for merging-masking.

EVEX.256.66.0F38.W0 91 /vsib VPGATHERQD xmm1 {k1}, vm64y
T1S V/V AVX512VL AVX512F
Using signed qword indices, gather dword values from memory using writemask k1 for merging-masking.

EVEX.512.66.0F38.W0 91 /vsib VPGATHERQD ymm1 {k1}, vm64z
T1S V/V AVX512F
Using signed qword indices, gather dword values from memory using writemask k1 for merging-masking.

EVEX.128.66.0F38.W1 91 /vsib VPGATHERQQ xmm1 {k1}, vm64x
T1S V/V AVX512VL AVX512F
Using signed qword indices, gather quadword values from memory using writemask k1 for merging-masking.

EVEX.256.66.0F38.W1 91 /vsib VPGATHERQQ ymm1 {k1}, vm64y
T1S V/V AVX512VL AVX512F
Using signed qword indices, gather quadword values from memory using writemask k1 for merging-masking.

EVEX.512.66.0F38.W1 91 /vsib VPGATHERQQ zmm1 {k1}, vm64z
T1S V/V AVX512F
Using signed qword indices, gather quadword values from memory using writemask k1 for merging-masking.

Instruction Operand Encoding

Op/En: T1S
Operand 1: ModRM:reg (w)
Operand 2: BaseReg (R): VSIB:base, VectorReg (R): VSIB:index
Operand 3: NA
Operand 4: NA

Description

A set of up to 8 doubleword/quadword memory locations pointed to by base address BASE_ADDR and index vector VINDEX with scale SCALE is gathered. The result is written into a vector register. The elements are specified via the VSIB (i.e., the index register is a vector register, holding packed indices). Elements will only be loaded if their corresponding mask bit is one. If an element's mask bit is not set, the corresponding element of the destination register is left unchanged. The entire mask register will be set to zero by this instruction unless it triggers an exception.

This instruction can be suspended by an exception if at least one element has already been gathered (i.e., if the exception is triggered by an element other than the rightmost one with its mask bit set). When this happens, the destination register and the mask register (k1) are partially updated; those elements that have been gathered are placed into the destination register and have their mask bits set to zero. If any traps or interrupts are pending from already gathered elements, they will be delivered in lieu of the exception; in this case, EFLAGS.RF is set to one so an instruction breakpoint is not re-triggered when the instruction is continued.

If the data element size is less than the index element size, the higher part of the destination register and the mask register do not correspond to any elements being gathered. This instruction sets those higher parts to zero. It may update these unused parts of one or both of those registers even if the instruction triggers an exception, and even if the exception is triggered before any elements have been gathered.
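The following is a minimal scalar sketch, in C, of the merging-masking behavior described above; it is illustrative only (the function name is hypothetical, and DISP is folded into base for brevity), not part of any API.

#include <stdint.h>
#include <string.h>

/* Illustrative scalar model of VPGATHERQD merging-masking.
 * An element is loaded only if its mask bit is set; each gathered
 * element clears its mask bit, so on normal completion k1 is zero. */
static void model_vpgatherqd(uint32_t dest[8], uint8_t *k1,
                             const uint8_t *base, const int64_t vindex[8],
                             int scale, int kl)
{
    for (int j = 0; j < kl; j++) {
        if (*k1 & (1u << j)) {
            memcpy(&dest[j], base + vindex[j] * scale, sizeof dest[j]);
            *k1 &= (uint8_t)~(1u << j);   /* mask bit cleared per element */
        }
        /* else: dest[j] is left unchanged (only merging-masking is allowed) */
    }
}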

Note that:

- The presence of the VSIB byte is enforced in this instruction. Hence, the instruction will #UD fault if ModRM.rm is different than 100b.
- This instruction has the same disp8*N and alignment rules as for scalar instructions (Tuple 1).
- The instruction will #UD fault if the destination vector zmm1 is the same as the index vector VINDEX. The instruction will #UD fault if the k0 mask register is specified.
- The scaled index may require more bits to represent than the address bits used by the processor (e.g., in 32-bit mode, if the scale is greater than one). In this case, the most significant bits beyond the number of address bits are ignored, as sketched below.
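A simplified C illustration of the last point (the helper is hypothetical, and segmentation and canonical-address checks are omitted): the scaled index is computed at full width, then bits above the address size are discarded.

#include <stdint.h>

/* addr_bits would be 32 in 32-bit mode and 64 in 64-bit mode. */
static uint64_t effective_address(uint64_t base, int64_t index,
                                  int scale, int64_t disp, int addr_bits)
{
    uint64_t ea = base + (uint64_t)index * (uint64_t)scale + (uint64_t)disp;
    if (addr_bits < 64)
        ea &= ((uint64_t)1 << addr_bits) - 1;  /* ignore bits above the address size */
    return ea;
}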

Operation

BASE_ADDR stands for the memory operand base address (a GPR); may not exist
VINDEX stands for the memory operand vector of indices (a ZMM register)
SCALE stands for the memory operand scalar (1, 2, 4 or 8)
DISP is the optional 1, 2 or 4 byte displacement
VPGATHERQD (EVEX encoded version)
(KL, VL) = (2, 128), (4, 256), (8, 512)
FOR j ← 0 TO KL-1
    i ← j * 32
    k ← j * 64
    IF k1[j]
        THEN DEST[i+31:i] ← MEM[BASE_ADDR + (VINDEX[k+63:k]) * SCALE + DISP]
            k1[j] ← 0
        ELSE *DEST[i+31:i] remains unchanged*    ; Only merging-masking is allowed
    FI;
ENDFOR
k1[MAX_KL-1:KL] ← 0
DEST[MAX_VL-1:VL/2] ← 0
VPGATHERQQ (EVEX encoded version)
(KL, VL) = (2, 128), (4, 256), (8, 512)
FOR j ← 0 TO KL-1
    i ← j * 64
    IF k1[j]
        THEN DEST[i+63:i] ← MEM[BASE_ADDR + (VINDEX[i+63:i]) * SCALE + DISP]
            k1[j] ← 0
        ELSE *DEST[i+63:i] remains unchanged*    ; Only merging-masking is allowed
    FI;
ENDFOR
k1[MAX_KL-1:KL] ← 0
DEST[MAX_VL-1:VL] ← 0
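The following is an illustrative C transcription of the VPGATHERQQ pseudocode above (the helper name is hypothetical). Unlike the earlier sketch, it also models the final two lines: mask bits above KL and destination elements above VL are zeroed.

#include <stdint.h>
#include <string.h>

static void model_vpgatherqq(uint64_t dest[8], uint8_t *k1,
                             const uint8_t *base, const int64_t vindex[8],
                             int scale, int64_t disp, int kl)
{
    for (int j = 0; j < kl; j++) {
        if (*k1 & (1u << j)) {
            memcpy(&dest[j], base + vindex[j] * scale + disp, sizeof dest[j]);
            *k1 &= (uint8_t)~(1u << j);   /* gathered element clears its mask bit */
        }
        /* else: element j of dest remains unchanged (merging-masking) */
    }
    /* k1[MAX_KL-1:KL] <- 0 and DEST[MAX_VL-1:VL] <- 0 */
    *k1 &= (uint8_t)((1u << kl) - 1);
    for (int j = kl; j < 8; j++)
        dest[j] = 0;
}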

Intel C/C++ Compiler Intrinsic Equivalent

VPGATHERQD __m256i _mm512_i64gather_epi32(__m512i vdx, void * base, int scale);
VPGATHERQD __m256i _mm512_mask_i64gather_epi32lo(__m256i s, __mmask8 k, __m512i vdx, void * base, int scale);
VPGATHERQD __m128i _mm256_mask_i64gather_epi32lo(__m128i s, __mmask8 k, __m256i vdx, void * base, int scale);
VPGATHERQD __m128i _mm_mask_i64gather_epi32(__m128i s, __mmask8 k, __m128i vdx, void * base, int scale);
VPGATHERQQ __m512i _mm512_i64gather_epi64( __m512i vdx, void * base, int scale);
VPGATHERQQ __m512i _mm512_mask_i64gather_epi64(__m512i s, __mmask8 k, __m512i vdx, void * base, int scale);
VPGATHERQQ __m256i _mm256_mask_i64gather_epi64(__m256i s, __mmask8 k, __m256i vdx, void * base, int scale);
VPGATHERQQ __m128i _mm_mask_i64gather_epi64(__m128i s, __mmask8 k, __m128i vdx, void * base, int scale);
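A minimal usage sketch of the unmasked 512-bit form listed above, _mm512_i64gather_epi64; it assumes an AVX-512F-capable target (compile with, e.g., -mavx512f).

#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int64_t table[16];
    for (int i = 0; i < 16; i++)
        table[i] = 100 + i;

    /* Gather the odd-indexed elements. Indices are element counts;
     * scale 8 converts them to byte offsets from the base. */
    __m512i idx = _mm512_set_epi64(15, 13, 11, 9, 7, 5, 3, 1);
    __m512i v = _mm512_i64gather_epi64(idx, table, 8);

    int64_t out[8];
    _mm512_storeu_si512(out, v);
    for (int i = 0; i < 8; i++)
        printf("%lld\n", (long long)out[i]);   /* prints 101, 103, ..., 115 */
    return 0;
}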

SIMD Floating-Point Exceptions

None

Other Exceptions

See Exceptions Type E12.