3D Ray Tracing
This topic has 11 replies, 5 voices, and was last updated 7 years, 10 months ago by Ed P.
March 19, 2018 at 8:59 pm #17894
It looks like Microsoft have raised the bar with this new 3D rendering API. I have not had time to absorb the implications, but at first glance it could do wonders for game engines and may even lead to better and less costly 3D modelling and rendering engines. (I really must learn Blender this year, as there is nothing cheaper than free!)
Games should benefit by doing away with pre-rendered cut-scenes; instead, such sequences could be rendered on the fly.
March 20, 2018 at 5:53 pm #17932
They've been on about this for years, all the way back to my 6800 Ultra.
My current card, a 1070, pumps out wild amounts of FLOPS.
I think we've got some way to go before game engines can really turn on the charm with ray tracing, because we don't yet have the compute power.
Although I feel this will come sooner rather than later – somewhere in the 20-TFLOP zone.
It's funny: my media centre has a throw-away 610 card in it, yet it matches my old king-of-the-hill 6800 Ultra.
I guess my 1070 will be throw-away compute power in 10 years' time. Unfortunately I will be 63.
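For anyone curious about the scale of that jump, here are rough FP32 throughput figures commonly quoted for the cards mentioned above. These are ballpark numbers only, so treat the comparison as order-of-magnitude:

```python
# Rough FP32 throughput for the cards mentioned above.
# Commonly quoted ballpark figures; order-of-magnitude only.

cards_gflops = {
    "GeForce 6800 Ultra (2004)": 54,    # approximate
    "GeForce GT 610 (2012)": 155,       # approximate
    "GeForce GTX 1070 (2016)": 6500,    # approximate
}

baseline = cards_gflops["GeForce 6800 Ultra (2004)"]
for card, gflops in cards_gflops.items():
    print(f"{card}: ~{gflops} GFLOPS ({gflops / baseline:.0f}x the 6800 Ultra)")
```

On these numbers the GT 610 does indeed edge out the old 6800 Ultra, and the 1070 is over a hundred times faster.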
April 28, 2018 at 5:45 pm #20145
This ray tracing has really caught my attention.
In some demos, significantly more detail is unlocked – obviously because rasterisation is only an approximation.
But I never thought the impact would be so big, as today's games are already amazing.
I also have a strong suspicion that the next Tomb Raider game will support real Nvidia ray tracing.
Watch this space. 🙂
April 28, 2018 at 6:40 pm #20151
[quoting the post above about the 6800 Ultra and the 1070]
A callow youth, Keith!?? That ain’t old…
When the Thought Police arrive at your door, think -
I'm out.
April 29, 2018 at 1:04 am #20169
The problem with real-time rendering for the man at home is the amount of work you have to do to get something looking half decent. Bear in mind that, unlike full-on renderers (Blender included), real-time work relies on low-poly meshes with highly optimised texture maps. Creating those maps can take an awfully long time.
Having said that, some of the stuff produced on the Unreal Engine is absolutely amazing, but I shudder to think of the preparation time.
Arch Linux, on a Ryzen 7 1800X, 32 GB, 5 (yes -5) HDs inc 5 SSDs, 4 RPi 3Bs + 1 RPi 4B - one as an NFS server with two more drives, PiHole (shut yours), Plex server, cloud server, and other random Pi stuff. Nice CoolerMaster case, 2 x NV GTX 1070 8GB, and a whopping 32" AOC 1440P monitor.
April 29, 2018 at 8:22 am #20176
I need to read the API again, but it did not seem to require improved modelling, triangulation or UV mapping. Its main focus appears to be providing more efficient culling, so that only the rays that will actually appear in a scene need to be traced, making ray tracing more affordable.
The way I think of it is that old-school rasterisation works from the viewer towards the object, culling against the z-buffer as it goes, whereas the M$ API traces only the visible pixels, back from the object to the viewer. Incidentally, this ought to handle occlusion effects automatically – the sort of fuzzy edges that objects can have. It should also automatically generate shadow maps.
My guess is that its first impact will be very much improved in-game graphics.
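The "trace back from the visible pixels" idea can be sketched as a toy CPU ray caster – a hypothetical illustration only, nothing to do with the actual DirectX API: one primary ray per pixel, plus a shadow ray per hit to test light visibility:

```python
import math

# Toy sketch of per-pixel ray casting: one primary ray per visible pixel,
# traced back from the camera into a one-sphere scene, plus a shadow ray
# per hit. Hypothetical CPU code, not the DirectX API.

def hit_sphere(origin, direction, centre, radius):
    """Distance to the nearest intersection along a unit ray, or None."""
    oc = [o - c for o, c in zip(origin, centre)]
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c  # direction is unit length, so a == 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None  # epsilon avoids self-intersection

def render(width, height):
    sphere_centre, sphere_radius = (0.0, 0.0, -3.0), 1.0
    light = (5.0, 5.0, 0.0)
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            # Primary ray through this pixel (simple pinhole camera at origin).
            px = 2 * (x + 0.5) / width - 1
            py = 1 - 2 * (y + 0.5) / height
            d = (px, py, -1.0)
            norm = math.sqrt(sum(v * v for v in d))
            d = tuple(v / norm for v in d)
            t = hit_sphere((0.0, 0.0, 0.0), d, sphere_centre, sphere_radius)
            if t is None:
                row.append(0.0)  # ray misses everything: background
                continue
            hit = tuple(t * v for v in d)
            # Shadow ray: is the light visible from the hit point?
            to_light = [l - h for l, h in zip(light, hit)]
            ln = math.sqrt(sum(v * v for v in to_light))
            to_light = tuple(v / ln for v in to_light)
            shadowed = hit_sphere(hit, to_light, sphere_centre, sphere_radius)
            row.append(0.1 if shadowed else 1.0)
        image.append(row)
    return image
```

Note how shadows fall out of the same intersection test – no separate shadow maps needed, which is the point being made above.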
April 29, 2018 at 5:05 pm #20203
The first time I looked at ray tracing it was on an Atari ST with POV software.
A single scene at 320×200 in 16 colours with 2 lights took 18 hours to render.
Have things improved since then, then?
April 29, 2018 at 10:47 pm #20212
For the RT stuff, it's low-poly, with all the detail in the textures, which means putting a hell of a lot more effort into specular, normal, bump and reflection maps than you need to for a full 3D path tracer.
Path tracing, OTOH, tends to be based on highly detailed geometry and physically based shaders.
This is an issue when moving to RT, simply because the required detail isn't there when converting – i.e. the artist has to make standout, technically superior textures for RT compared with what the path-tracer artist has to do.
April 30, 2018 at 8:10 am #20217
Dan, you may be interested in this technical article on Kingdom Come: Deliverance (Google the title for some quite stunning YouTube RT eye-candy). This game is 'old-school' but sets a pretty high bar for realism. As the article says, they used a texel (texture pixel) count of 256 texels/sq metre of scene. While that requires a lot of work, it requires even better database design to be able to build scenes up from a palette of texel objects.
Where I anticipate the M$ API benefiting things is that it will make a 512 texels/metre target more achievable.
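As a rough feel for what those texel-density targets cost, here is some back-of-envelope arithmetic. It assumes (hypothetically) that the density is linear, in texels per metre, and that textures are uncompressed RGBA8 with no mipmaps; real games compress heavily and reuse tiles, so actual figures would be far lower:

```python
# Back-of-envelope texture memory for a texel-density target.
# Assumes density is linear (texels per metre of surface) and
# uncompressed RGBA8 (4 bytes per texel) with no mipmaps.

def texture_bytes(area_m2, texels_per_metre, bytes_per_texel=4):
    """Unique texture data needed to cover `area_m2` of scene surface."""
    texels = area_m2 * texels_per_metre ** 2
    return texels * bytes_per_texel

scene_area = 1000.0  # m^2 of visible surface, purely illustrative
for density in (256, 512):
    mib = texture_bytes(scene_area, density) / 2**20
    print(f"{density} texels/m: ~{mib:.0f} MiB of unique texels")
```

Doubling the linear density quadruples the texel count, which is why moving from 256 to 512 is such a jump.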
April 30, 2018 at 9:25 am #20224
Just a follow-up comment: while your point about texture detail is of course correct, it really only applies to background LoD work, where an average normal map is probably baked into the textures. Foreground action work HAS to be done with polygons, deformed where necessary by skeletons, and then passed through whatever process transfers the result to the game engine (e.g. Maya -> CryEngine).
April 30, 2018 at 8:45 pm #20253
I don't mean to sound overly critical – RT is a different skill set entirely from my own. For example, this https://www.youtube.com/watch?v=6oo293kIGPQ looks awesome in many parts, but I'd struggle to call it photo-realistic.
EDIT: Compared with https://cdna.artstation.com/p/assets/images/images/001/983/250/large/dee-van-hoven-nature-wild-grassfield16.jpg?1455551647
In a few years I think the two (path tracing and RT) will converge, as both the programming and the hardware improve, but at the moment the best path-traced animations still leave the best RT behind.
May 1, 2018 at 7:18 am #20259
Dan, the problem with game RT rendering is that it has to make compromises to meet the insane target of rendering an unscripted scene in less than 0.017 seconds on a high mid-range CPU and GPU. A typical Blender or other cut-scene render can take as long as necessary, using as large a render farm as it needs, and just save out the finished frame.
The frame rate is the killer that drives reality compromises, as a game that achieves under 60 fps gets very critical review comments. Level-of-Detail (LoD) modelling (polygon count per object goes down with distance) is a typical compromise, but you will still hear complaints about 'popping' as objects move from one LoD to another. There are many other artificial (unnatural) artifices that developers have to use to reduce render time. Even then, a glance at any face in any RT scene shows it up as glaringly unreal – as you know, flesh is very difficult to render even in pre-renders, and RT just cannot afford the frame time to do anything except teeter on the edge of the uncanny valley.
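That 0.017-second figure falls straight out of the frame rate. A quick sketch of the budget arithmetic (the resolution here is purely illustrative):

```python
# Frame-time budget at common target frame rates, and what that
# implies per pixel if the whole frame were spent shading.

def frame_budget_ms(fps):
    """Milliseconds available to render one frame at a given frame rate."""
    return 1000.0 / fps

def ns_per_pixel(fps, width, height):
    """Nanoseconds available per pixel within one frame's budget."""
    return frame_budget_ms(fps) * 1e6 / (width * height)

for fps in (30, 60, 144):
    print(f"{fps} fps -> {frame_budget_ms(fps):.1f} ms/frame, "
          f"{ns_per_pixel(fps, 1920, 1080):.1f} ns/pixel at 1080p")
```

At 60 fps and 1080p there are only about 8 nanoseconds per pixel, which is why every ray beyond the first few has to be fought for.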
If Moore's law holds, maybe another 10 years will see some of these artifices become unnecessary, and it will become more common to ease the burden on artists by using natural scans of scenes and objects, as shown in the demo you linked. I'd also expect some of these techniques to spill back into the world of Blender, as time constraints also restrict what artists can spend on design and creativity. I think it could be 20 years before pre-renders and RT renders become virtually indistinguishable, though that of course does not allow for the impact of 8K screens or 48-bit true colour!
I suspect it is a 'competition' in which pre-rendering will always stay slightly ahead.
