
Coursework III: Path Tracing

COMP0027 Team

November 28, 2022

We have shown you the framework for solving the coursework at https://uclcg.github.io/uclcg/. You should start by extending the respective example there. The programming languages are WebGL (https://www.khronos.org/registry/webgl/specs/latest/1.0/) and the OpenGL ES Shading Language (GLSL, www.khronos.org/files/opengles_shading_language.pdf). This should run in any browser, but we have previously experienced problems with Safari and therefore recommend a different browser such as Chrome. Do not write any answers in any other programming language, on paper, or in pseudocode. Do not write code outside the #define blocks.

Remember to save your solution often to a .uclcg file. In the end, hand in that file via Moodle. The total number of points for this exercise is 100. Please refer to Moodle for the due dates.

Introduction

We have prepared a simple path-tracing framework that you will be asked to extend. The framework is very similar to the solution of the first coursework (ray tracing), except that we removed cylinder intersections and point lights. As in the previous coursework, we proceed at an artificially low resolution for two reasons: slow computers, and to let you see individual pixels and thus notice subtle effects. The new revision of the framework allows you to change the resolution with a new button.

The path tracer is progressive: it runs permanently once loaded and computes a new sample at each pixel. The result is averaged over time by the framework. The solution is reset every time you change the code. To add new iterations, press the play or stop buttons on the web page. The current solution will run for 1000 samples.

The results are also tone-mapped and gamma-corrected. Typically, you do not need to change the code in the "Tonemapping" tab to solve the coursework, although modifying it might help for visual debugging in some cases. Tone-mapping reduces the physical light values to a range your display can actually reproduce and that appears most plausible visually. Gamma-mapping converts the physically linear units your simulation produces into the non-linear values your display expects. If you know the gamma value of your monitor, you can change the γ = 2.2 we assume to something better suited, to see more detail in light or shadow (some machines go as low as γ = 1.6 these days).

1 Let there be light! (5 points)

In the beginning you will see only the diffuse colors. First, extend the Material struct to hold the information that every material in path tracing needs in order to become a light source (2 points). For our example, the first sphere should become a source emitting 150.0 · (0.9, 0.9, 0.5) units of light, the second should emit 150.0 · (0.8, 0.3, 0.1), and all other objects should not be light sources. Second, use this information in getEmission (1 point). You should now see the direct light as shown below. Write two sentences about what the gamma is doing in our case (2 points).

2 Now bounce (10 points)

The current code is only able to render the first bounce of light between the object and the camera. We will now add multiple bounces.

To do so, first implement the function randomDirection to return a random direction in 3D. The parameter to this function is the dimensionIndex of the bounce. The i-th bounce's sampling dimension is PATH_SAMPLE_DIMENSION + 2i. This index will later be used in advanced (e.g., Halton) sampling that proceeds differently in different dimensions.

The lecture has explained why picking a random 3D point in the unit cube and normalizing it is not a valid solution. Instead, you should use the formula below to compute a vector ωi = (x, y, z), where ξ0 and ξ1 are random sample coordinates in (0, 1) provided by our function sample (2 points):

    z = 2ξ0 − 1,   φ = 2πξ1,   ωi = (√(1 − z²) cos φ, √(1 − z²) sin φ, z)

The formula above has two logical parts; can you identify them? Implement the formula by calling two separate functions (1 point) instead of in one go, and give them proper names (1 point). What would be a unit test of this that involved a third function, and what is that third function? (2 points). Implement this test and describe how it would be run by setting a simple flag (2 points).

Next, you need to use this function to trace a ray in the direction ωi. The function intersectScene, similar to the one used in the first coursework, is at your disposal to do so (2 points).

Please use the constant const int maxPathLength defined at the top to control the maximal length of the paths you sample. At maxPathLength = 2, the image should look like the one below:
3 Throughput (30 points)

The current solution solves an alternative, non-physical equation that misses the reflectance and the geometric term. First, implement both methods, getReflectance and getGeometricTerm; currently they are not called (10 points). Second, those functions have to be multiplied up correctly to compute the throughput of every light-path vertex to the camera (10 points). You might want to remember how the variable weight worked in the backward ray-tracer in coursework 1. Finally, add comments to explain your implementation of getGeometricTerm and getReflectance and how you used them to compute the throughput (10 points). Once this is solved, a converged image will look like the following:

4 Variance reduction: Sampling patterns (20 points)

Currently, we draw random numbers using a simple linear congruential generator, frand, for every pixel. We have seen in the lecture how quasi-random patterns such as Halton reduce variance. Here, we will make use of them.

Change the implementation of sample to use Halton sampling (5 points). Note how sample takes the dimension as a parameter. For the frand implementation, this parameter was not relevant; for Halton it will be. You might find the existing function prime(int index), which returns the i-th prime number from a table, useful. We do not require you to explain Halton sequences, radical inversion, or van der Corput sequences, as long as you use them correctly.

You will notice the effect as quicker convergence and less noise, but structured patterns. As both methods are unbiased, the end result will be the same. Consider comparing the images with and without the change: they should be identical.

Let si,j,k be the sample in (0, 1) which our function sample returns for dimension i, sample j, and pixel k. Using the same Halton pattern si,j,k = Hi,j might produce quicker convergence, but leads to structured patterns you will find objectionable, as all pixels k have the same value. Using a uniform random value si,j,k = ξi,j,k avoids this, but converges more slowly.
The lecture has mentioned Cranley-Patterson rotation to combine both: in dimension i, all pixels use the same pattern Hi,·, but shifted modulo 1 by a random per-pixel and per-dimension (but not per-sample) offset ξi,k, as in s(i, j, k) = fract(Hi,j + ξi,k). Add code for this improvement (10 points) and explain your implementation (5 points).

5 Anti-aliasing (10 points)

We have seen that other effects such as depth of field, motion blur, and anti-aliasing can be understood as generalized sampling domains. A simple domain to tackle is the pixel domain. Add pixel sampling with a box filter to produce anti-aliasing (5 points) and explain your implementation (5 points). After adding anti-aliasing, the edges should be nicely blurred, as seen in the image below.

6 Importance sampling is important (20 points)

These are more open-ended and advanced questions. Implement importance sampling for the geometric term (10 points). The figure below shows, in the top row, an image with importance sampling and, in the bottom row, an image without it, at resolution 256×128 with 10 samples.

What is the most basic job you expect such an importance sampling to do? (1 point). Write a comment in the importance sampling function.

What is the difference between an importance-sampled result and one without it for infinitely many samples? Make sure your implementation approaches said difference for many (e.g., 10,000) samples! (1 point). Write a comment in the importance sampling function.

Can you demonstrate an extension to multiple light sources (emissive spheres in our case) that have very different values of emissiveness?
Please implement and describe it in the way you see fit (6 points). Can you also propose and implement, in a similar way, what would be done if some of those spheres had positions very different from the positions seen in the framebuffer (2 points)?

7 Motion Blur (5 points)

Add motion blur to the integrator (5 points). If you give a motion of (−3, 0, 3) to the sphere with index 2 and a motion of (2, 4, 1) to the sphere with index 3, the image will look like this: