
d3d help

Doomulation

Okay, this is a project I was messing around with a little some time ago...
The thing was, I couldn't get it working right. There are two cubes which I render, the second cube INSIDE the first one.
The problem is this:
- either the second cube is completely clipped (i.e. invisible, doesn't exist)
- or the clipping starts to get all funny.

To see what I mean, see the screenies I attached.
In the first screenie, I've used pDevice->SetRenderState(D3DRS_ZENABLE,true);
In the second, pDevice->SetRenderState(D3DRS_ZENABLE,false);

I'm showing the rest of the d3d code (save for the vertices, to save space), in case you can spot what I'm doing wrong.

Code:
bool CD3DTestDlg::InitD3D()
{
	pD3D = Direct3DCreate9(D3D_SDK_VERSION);
	
	ZeroMemory(&m_PresentParameters,sizeof(m_PresentParameters));
	m_PresentParameters.Windowed = true;
	m_PresentParameters.SwapEffect = D3DSWAPEFFECT_DISCARD;
	m_PresentParameters.EnableAutoDepthStencil = true;
	m_PresentParameters.AutoDepthStencilFormat = D3DFMT_D16;
	m_PresentParameters.hDeviceWindow = m_hWnd;
	m_PresentParameters.BackBufferHeight = m_Height;
	m_PresentParameters.BackBufferWidth = m_Width;
	m_PresentParameters.BackBufferFormat = D3DFMT_R5G6B5;
	//m_PresentParameters.MultiSampleType = D3DMULTISAMPLE_NONE;

	HRESULT hr;
	if (FAILED(hr = pD3D->CreateDevice(0,D3DDEVTYPE_HAL,m_hWnd,
		D3D_HWVP/*|D3DCREATE_MULTITHREADED*/,&m_PresentParameters,
		&pDevice)))
	{
		ErrorBox(hr,"Error creating a D3D Device!\nReturned error: ");
		return false;
	}
	return true;
}

bool CD3DTestDlg::InitScene()
{
	HRESULT hr;
	hr = pDevice->SetRenderState(D3DRS_LIGHTING,false);
	hr = pDevice->SetRenderState(D3DRS_CULLMODE,D3DCULL_NONE);
	//hr = pDevice->SetRenderState(D3DRS_ZENABLE,true);
	hr = pDevice->SetRenderState(D3DRS_ALPHABLENDENABLE,true);
	hr = pDevice->SetRenderState(D3DRS_SRCBLEND,D3DBLEND_SRCALPHA);
	hr = pDevice->SetRenderState(D3DRS_DESTBLEND,D3DBLEND_INVSRCALPHA);
	hr = pDevice->SetFVF(D3DFVF_CUSTOMVERTEX);
// Some more code, which inits the vertices, copies them into vertex buffers and creates the textures.
	return true;
}

bool CD3DTestDlg::RenderScene()
{
	if ( CheckDevice() )
	{
		HRESULT hr;
		hr = pDevice->Clear(0,NULL,D3DCLEAR_TARGET|D3DCLEAR_ZBUFFER,0,1.0f,0);
		hr = pDevice->BeginScene();
		
		Transform();

		hr = pDevice->SetTexture(0,pTextureFront);
		hr = pDevice->SetStreamSource(0,pCube,0,sizeof(CUSTOMVERTEX));
		pDevice->DrawPrimitive(D3DPT_TRIANGLELIST,0,2);
		pDevice->SetTexture(0,pTextureBack);
		pDevice->DrawPrimitive(D3DPT_TRIANGLELIST,6,2);
		pDevice->SetTexture(0,pTextureLeft);
		pDevice->DrawPrimitive(D3DPT_TRIANGLELIST,12,2);
		pDevice->SetTexture(0,pTextureRight);
		pDevice->DrawPrimitive(D3DPT_TRIANGLELIST,18,2);
		pDevice->SetTexture(0,pTextureUp);
		pDevice->DrawPrimitive(D3DPT_TRIANGLELIST,24,2);
		pDevice->SetTexture(0,pTextureDown);
		pDevice->DrawPrimitive(D3DPT_TRIANGLELIST,30,2);

		pDevice->SetStreamSource(0,pCube2,0,sizeof(CUSTOMVERTEX));
		pDevice->SetTexture(0,pTextureCube2);
		pDevice->DrawPrimitive(D3DPT_TRIANGLELIST,0,12);

		hr = pDevice->EndScene();
		hr = pDevice->Present(NULL,NULL,NULL,NULL);
	}
	
	//fRotation += 0.001f;
	return true;
}

bool CD3DTestDlg::Transform()
{
	D3DXMATRIX matProjection, matView, matWorld, matWorld2, matWorld3;

	D3DXMatrixRotationX( &matWorld,  timeGetTime()/1000.0f );
	D3DXMatrixRotationY( &matWorld2, timeGetTime()/1000.0f );
	pDevice->SetTransform(D3DTS_WORLD, &(matWorld2));

	D3DXMatrixLookAtLH(&matView, &D3DXVECTOR3(0.0f, 0.0f, 3.0f),
								 &D3DXVECTOR3(0.0f, 0.0f, 0.0f),
								 &D3DXVECTOR3(0.0f, 1.0f, 0.0f));
	D3DXMatrixPerspectiveFovLH(&matProjection,D3DX_PI/4.0f,
		(float)m_Width / (float)m_Height,1.0f,1000.0f);
	pDevice->SetTransform(D3DTS_PROJECTION,&matProjection);
	pDevice->SetTransform(D3DTS_VIEW, &matView);
	return true;
}

This should be everything relevant for the d3d. Vertex, vertex buffer and texture creation have been omitted. Note also that the images I create the textures from are transparent (the images themselves!), which is supposed to let me see through the first cube.

While I'm at it... I was going to try lighting, but I'm not really good at computing the normals for the vertices. If some typical sample code could be provided I'd be eternally grateful.
 

BGNG

New member
I use OpenGL instead of Direct3D, though their developer-end operations are largely the same. This appears to be a classic problem with incorrect or missing depth testing, as well as something you don't know about blending...

=======================
Part 1: Depth Testing
=======================

Look at the second picture you posted. You can see the "ce" of the "Left Face" get cut off by the polygon that makes the "Downside Face"... If it were simply transparent, it would have been blended over the top and you could see, though distorted, the entire text "Left Face."

This leads me to believe that the "Downside Face" is being drawn AFTER the "Left Face". With depth testing disabled, or incorrectly configured to anything but LEQUAL (Less than or Equal To, in OpenGL lingo), the pixels of the bottom face get drawn on top of the left face instead of not being drawn at all, which would be "physically" correct.

=======================
Part 2: Blending
=======================

I think what you intended to do in this application is make the smaller cube, which is inside the larger cube, visible by making the larger cube transparent.

Your problem is that you're drawing the foreground polygons first. The larger cube, in your particular program, will ALWAYS have polygons drawn in front of the smaller cube. So, logically, anything behind those polygons (with depth testing enabled) will not be seen.

What you were probably thinking is "Oh, I'm making transparent polygons. That means you can see through them." Well... That's only mostly true. Alpha blending only affects the way a polygon's pixels are drawn to the screenbuffer, not how depth testing is applied. You're basically drawing a solid polygon that only LOOKS transparent, not unlike the "leap of faith" scene in "Indiana Jones and the Last Crusade," where the bridge is painted to resemble the rocks below.

=======================
Part 3: The Solution
=======================

So taking these things into account, fixing it is actually really easy. Simply draw the smaller cube first, then the larger, blended one. That ensures the "transparent" polygons blend with the pixels of the cube behind them, thus creating the illusion of a see-through box.

Remember this about blending: ALWAYS draw from the back to the front to make sure it all works correctly.
 

blight

New member
seems like you disabled culling (hr = pDevice->SetRenderState(D3DRS_CULLMODE,D3DCULL_NONE);)
also for transparent polygons the order in which they are drawn does matter (they have to be drawn as the last polygons, all solid ones first, and back-to-front :p)
 

BGNG

New member
It isn't a matter of culling. That simply chooses whether or not to draw a polygon. This one has to do with a mix of blending and depth testing.
 
OP
Doomulation
Rendering the first ones first (the solid) is what I'm doing... heh, so that really can't be the solution, blight ;) I tried disabling culling so it wouldn't clip the lesser cube.

BGNG: thanks for the heads up. I might try reversing the rendering order of the cubes. The reason I also used transparent textures is that I couldn't get it transparent any other way :p I'm still a n00b to d3d.
 

BGNG

New member
You said it yourself: "...the second cube INSIDE the first one,"... That more or less says "I'm rendering the smaller one second." Swap 'em if this is the case, and learn English if it is not. (-:
 
OP
Doomulation
You're right :)
I switched them and it worked!
Funny thing is, though, even though it worked I find it strange...
when I disabled Z enable again (the same scenario as the second pic I attached), the cube disappeared, while still giving weird clipping errors. Wonder why.

But when it's enabled it works like a charm! Big thanks.
Now if only I could remember all this about the normals so I can add lighting to it...
 

BGNG

New member
Imagine a normal as a line sticking perfectly out from a polygon. If your desk were a polygon, the normal would be a pencil sticking straight up out of it. A "Normal" is a vector exactly 1 unit long.

Anyhoo... Let's say that you have a square with the following vertices:

V1 = -1, 1, 1
V2 = -1, -1, 1
V3 = 1, 1, 1
V4 = 1, -1, 1


Since the Z values are all 1, we can interpret this as being one unit towards the camera from the origin. Therefore, since the polygon is facing us directly, the normal will also be facing us directly. So the normal in this case, perfectly perpendicular to the polygon, would be 0, 0, 1



Now, let's say we have a square with these coords:

V1 = 1, 1, -1
V2 = 1, -1, -1
V3 = 1, 1, 1
V4 = 1, -1, 1


You could say it's pointing to the right, right? Right. But let's pretend this is actually the left face of a cube that's 2 units to the right. That means its normal would be jutting out to the left. So its normal would logically be -1, 0, 0


Lastly, let's consider a square that's slightly rotated, such that it's facing toward us, to the right, and also up. That means that the normal is somewhere between facing us directly, facing right directly, and facing up directly. You might say the Normal could easily be 1, 1, 1, but that's not entirely true. 1, 1, 1 is not a normal, because it's longer than 1 unit.

The function Length = SqrRoot(X ^ 2 + Y ^ 2 + Z ^ 2) can be used to find the actual length to a point from the origin. To "Normalize" (yes, that's what they call it) the vector, simply multiply each value by 1/Length. In this case, SqrRoot(1 ^ 2 + 1 ^ 2 + 1 ^ 2) = SqrRoot(3), so multiply 1/that by our values (all ones) to make the resulting Normal Vector 1 * (1/SqrRoot(3)), 1 * (1/SqrRoot(3)), 1 * (1/SqrRoot(3))...
__________

Okay, that's all a bunch of hoo-hah... But remember that lighting is applied to a polygon by shading its vertices. If all the vertices in a polygon have the same normal vector, then they will all be shaded identically, causing the entire polygon to light up at the same time. They call that Face Lighting, even though it's still applied to vertices.

Also remember that a vertex of a different polygon, even if it has the same coordinates as another vertex, is still its own entity, and lighting can be applied to that vertex separately from all the others. This is how, in a cube, one "corner" lights three sides differently, even though all three vertices have the same coordinates.

Ideally, all vertices with the same coordinates on a smooth surface will have the same Normal vector, because things like lighting, sphere mapping, etc. will blend it across the polygon instead of making the polygon look super-reflective.
__________

You've probably got questions... So ask any you may come up with...... Which may be quite a few...
 
