Trying to add a new pass to render proxy mesh Normals in UV space #559
aVersionOfReality
started this conversation in
General
Replies: 1 comment
-
Assuming the meshes have different topology, and that they're animated so they need to be baked constantly, I would do something like:
I'll link you to some references in the code that do similar stuff:
Maybe start by supporting a single object and a single baked texture, then move on to multiple objects and the TextureArray.
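A minimal sketch of that per-frame "live bake" flow, in plain Python. All class and method names here are hypothetical stand-ins, not Malt's actual API; the sketch only illustrates the order of operations (one bake target per object, re-rendered every frame):

```python
class BakeTarget:
    """A render target holding the baked UV-space normals for one object."""
    def __init__(self, resolution=(1024, 1024)):
        self.resolution = resolution
        self.pixels = None  # would be a GPU texture in a real pipeline


class NormalBakePass:
    """Renders each proxy mesh flattened into UV space, every frame."""
    def __init__(self):
        self.targets = {}  # one baked texture per object, keyed by name

    def ensure_target(self, obj_name, resolution):
        # Create the render target lazily, once per object.
        if obj_name not in self.targets:
            self.targets[obj_name] = BakeTarget(resolution)
        return self.targets[obj_name]

    def render(self, scene_objects):
        baked = []
        for obj_name, resolution in scene_objects:
            target = self.ensure_target(obj_name, resolution)
            # In a real pass: bind the target framebuffer, draw the mesh
            # with a vertex shader that outputs UVs as positions, and
            # write world-space normals from the fragment shader.
            target.pixels = f"normals of {obj_name}"
            baked.append(obj_name)
        return baked


bake_pass = NormalBakePass()
print(bake_pass.render([("proxy_head", (1024, 1024))]))  # -> ['proxy_head']
```

Starting with a single object keeps this a dict with one entry; moving to multiple objects and a TextureArray mostly changes how `targets` is stored and bound, not the per-frame loop.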
-
I am trying to make a setup that will render Object Normals in UV space from a proxy mesh: basically a live baking setup (which I guess is also similar to rendering a shadow map). This is to solve the problem that mesh-level Normal Transfer isn't viable in real-time rendering (hopefully I can get something working in Unity/Unreal in the long run too, but I've got to start here). I'm looking for any input on how to get this working, whether there's a better way to approach it, or whether this won't work at all and is a bad idea.
Here's the basic idea I have so far:
The proxy mesh is flattened into its UV layout, and that layout is rendered with an orthographic camera and a Normals shader.
![image](https://private-user-images.githubusercontent.com/36803956/338017161-8ba5ccb0-15b7-4064-9f5e-162696d36e1f.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MjIzNzA1NzMsIm5iZiI6MTcyMjM3MDI3MywicGF0aCI6Ii8zNjgwMzk1Ni8zMzgwMTcxNjEtOGJhNWNjYjAtMTViNy00MDY0LTlmNWUtMTYyNjk2ZDM2ZTFmLnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDA3MzAlMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwNzMwVDIwMTExM1omWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPTM3MjY4MmE3Yjk4NDZkNjUwZDVjNDk5MjBhYTYzYTAzYTJhNjUxZmVjYjg2OGNjN2U5NDkyMTA1YmVhZGJiZWEmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.S_PNHhfZcKMkinQNeFitWD-Hl-8zSfKHipk9hsanLBI)
When the mesh is rendered regularly, this rendered texture is used as the Normals.
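One standard way to do that UV-space render step (and a possible alternative to an explicit orthographic camera) is a vertex shader that outputs the vertex's UV coordinate, remapped from [0,1] to clip space [-1,1], as its position, while the fragment shader writes the world-space normal. Here is that remap sketched in Python; in GLSL it would be roughly `gl_Position = vec4(uv * 2.0 - 1.0, 0.0, 1.0);`:

```python
def uv_to_clip(u, v):
    """Remap a UV coordinate in [0,1] to OpenGL clip/NDC space [-1,1]."""
    return (u * 2.0 - 1.0, v * 2.0 - 1.0)

# The corners of the UV square map to the corners of the viewport, so the
# rendered image covers the entire baked texture.
print(uv_to_clip(0.0, 0.0))  # -> (-1.0, -1.0)
print(uv_to_clip(1.0, 1.0))  # -> (1.0, 1.0)
print(uv_to_clip(0.5, 0.5))  # -> (0.0, 0.0)
```

Because the positions are emitted directly in clip space, no camera matrix is strictly needed for this pass; the "orthographic camera" is effectively baked into the remap.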
I assume this would be something like this, with options for which UV map(s) to use, resolution, etc:
![image](https://private-user-images.githubusercontent.com/36803956/338021352-6f8cf554-17f5-4ede-9864-d107786e57ae.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MjIzNzA1NzMsIm5iZiI6MTcyMjM3MDI3MywicGF0aCI6Ii8zNjgwMzk1Ni8zMzgwMjEzNTItNmY4Y2Y1NTQtMTdmNS00ZWRlLTk4NjQtZDEwNzc4NmU1N2FlLnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDA3MzAlMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwNzMwVDIwMTExM1omWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPTQ0OTQ0ZDFiYjgyNTQ2YmY3NDY2YTExODgwNWZlYjVhMGJlOTc4OTI0YWVmMDBmOTA1MTEzZjNlZjNlMWRjNzQmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.ByZyvfWpD0omgjwvzgChhCCUj1iQgpSSKWYqb2gxDw0)
So I've got the basic theory. What I'm missing is how to set up another pass and camera. Is this mostly a matter of Python or GLSL? How is the shadow map camera done?
I've been looking up some general GLSL info, but the problem I'm running into is that I don't know enough to separate general GLSL concepts from engine-specific context. Lots of what I'm finding is written for other engines, and I can't tell what's a GLSL thing versus a Unity thing versus a Malt thing. My GLSL knowledge is the "node to code" sort; I don't know anything about the pipeline.