
Incorrect GLSL conversion when indexing into a structure. #21

Open
jbsheblak opened this issue Apr 3, 2017 · 1 comment
I'm having an issue with a conversion from an HLSL blob to GLSL v440 when dynamically indexing into an array of structures.

This is the original source:

struct STestStruct
{
   float4 value0;
};

cbuffer psBuffer
{   
   STestStruct             uc_values[4];
   uint                    uc_idx;
};

struct PS_INPUT
{
   float4 position : SV_POSITION;
};

struct PS_OUTPUT
{
   float4 color : SV_TARGET0;
};

PS_OUTPUT main(PS_INPUT input)
{
   PS_OUTPUT output;   
   output.color = uc_values[uc_idx].value0;   
   return output;
}

This is the HLSL blob generated by FXC:

//
// Generated by Microsoft (R) HLSL Shader Compiler 9.29.952.3111
//
//
//   fxc /T ps_4_1 D:/shaders/conversion_tests/test.fs.hlsl
//
//
// Buffer Definitions:
//
// cbuffer psBuffer
// {
//
//   struct
//   {
//
//       float4 value0;                 // Offset:    0
//
//   } uc_values[4];                    // Offset:    0 Size:    64
//   uint uc_idx;                       // Offset:   64 Size:     4
//
// }
//
//
// Resource Bindings:
//
// Name                                 Type  Format         Dim Slot Elements
// ------------------------------ ---------- ------- ----------- ---- --------
// psBuffer                          cbuffer      NA          NA    0        1
//
//
//
// Input signature:
//
// Name                 Index   Mask Register SysValue Format   Used
// -------------------- ----- ------ -------- -------- ------ ------
// SV_POSITION              0   xyzw        0      POS  float
//
//
// Output signature:
//
// Name                 Index   Mask Register SysValue Format   Used
// -------------------- ----- ------ -------- -------- ------ ------
// SV_TARGET                0   xyzw        0   TARGET  float   xyzw
//
ps_4_1
dcl_globalFlags refactoringAllowed
dcl_constantbuffer cb0[5], dynamicIndexed
dcl_output o0.xyzw
dcl_temps 1
mov r0.x, cb0[4].x
mov o0.xyzw, cb0[r0.x + 0].xyzw
ret
// Approximately 3 instruction slots used

and this is the GLSL that is generated:

#version 440
#extension GL_ARB_separate_shader_objects : enable

struct uc_values_Type {
        vec4 value0;
};
layout(binding = 0, std140) uniform psBuffer {
        uc_values_Type uc_values[4];
        uint uc_idx;
};
layout(location = 0) out vec4 SV_TARGET0;
uint u_xlatu0;
void main()
{
    //MOV
    u_xlatu0 = uc_idx;
    //MOV
    SV_TARGET0 = uc_values[0].value0[int(u_xlatu0)];
    //RET
    return;
}

The issue is on the line with the assignment to SV_TARGET0. 'uc_idx' should be used to index into the 'uc_values' array, but instead it ends up indexing into the first member of the structure in that array. Since each STestStruct is a single float4, constant-buffer register i holds uc_values[i].value0 (with uc_idx at cb0[4].x), so the cb0[r0.x + 0] access in the disassembly is a dynamic index into the array itself, not into value0.

I believe the correct translation should be:

SV_TARGET0 = uc_values[int(u_xlatu0)].value0;

I'm not very familiar with the code, but after stepping through my test case, it seems like ShaderInfo::GetShaderVarIndexedFullName is not taking Operand::m_suboperands into account. When decoding the shader, the operand has a fixed immediate value of 0 from the 'r0.x + 0' expression, but also a relative value from the 'r0.x' portion. I could be wrong about this, though. :)
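To illustrate, here's a minimal standalone C++ sketch (not HLSLcc's actual code; the function and its parameters are made up for this example) of how I'd expect the indexed name to be assembled when a relative suboperand is present: the dynamic part should land on the array subscript, combined with the immediate offset, and the member access should follow it.

// Minimal sketch, NOT HLSLcc's real implementation: building the GLSL
// expression for an operand like cb0[r0.x + 0] that addresses a member
// of a struct array. immediateIndex is the constant part ("+ 0") and
// relativeExpr is the translated relative suboperand ("int(u_xlatu0)").
#include <cstdio>
#include <string>

static std::string BuildIndexedName(const std::string& arrayName,
                                    const std::string& memberName,
                                    int immediateIndex,
                                    const std::string& relativeExpr)
{
    std::string index;
    if (relativeExpr.empty())
        index = std::to_string(immediateIndex);           // purely static access
    else if (immediateIndex != 0)
        index = relativeExpr + " + " + std::to_string(immediateIndex);
    else
        index = relativeExpr;                             // purely dynamic access

    // The dynamic part belongs on the array subscript, not on the member.
    return arrayName + "[" + index + "]." + memberName;
}

int main()
{
    // Prints "uc_values[int(u_xlatu0)].value0" -- the translation I expect,
    // instead of the emitted "uc_values[0].value0[int(u_xlatu0)]".
    std::printf("%s\n",
                BuildIndexedName("uc_values", "value0", 0, "int(u_xlatu0)").c_str());
    return 0;
}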

The blob was generated with: "fxc.exe" /T ps_4_1 test.fs.hlsl
HLSLcc was run with LANG_440 set and the following flags:
HLSLCC_FLAG_UNIFORM_BUFFER_OBJECT | HLSLCC_FLAG_INOUT_SEMANTIC_NAMES | HLSLCC_FLAG_COMBINE_TEXTURE_SAMPLERS | HLSLCC_FLAG_NVN_TARGET

Let me know if I can provide any additional information!


shadowndacorner commented Jun 23, 2017

I'm having this issue as well when indexing into a UBO. It's easy enough to fix by hand, but that kind of defeats the purpose.
struct LightData
{
   float4 l_Position;
   float4 l_Direction;
   float4 l_Color;
   float4 l_Params;
};

cbuffer LightBuffer
{
   LightData g_Light[64];
};

...

for (int i = 0; i < min(NumLights, 64); ++i)
{
   float3 lightDir = g_Light[i].l_Direction.xyz;
   ...
}

The loop compiles into

u_xlat4.xyz = g_Light[0].l_Direction[u_xlati17].xyz;

where it should compile into

u_xlat4.xyz = g_Light[u_xlati17].l_Direction.xyz;

The shader was compiled under LANG_430 with only HLSLCC_FLAG_UNIFORM_BUFFER_OBJECT enabled.
