Essential Blender - P26

Figure RCD.33: Render Layer settings for the foreground.
Figure RCD.34: Render Layer settings for the background.

Notice that the bottom set of Layer buttons for the "1 Render Layer" layer only includes objects from scene Layers 1 and 11. The Layer buttons for the "Background" layer include objects from scene Layer 2. Looking at the node network, a new Render Layer node has been created with Add->Input->Render Layers and set to use the "Background" render layer at the bottom of the panel. As you will only be darkening and blurring this layer, you can stick with the default "Combined" pass.
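If you prefer to script this kind of setup, the following Python snippet shows how a second Render Layers node, pointed at the "Background" layer, could be added with Blender's bpy API. This is only a rough sketch: the book describes the 2.4x interface, while these identifiers follow more recent Blender releases, and the layer name is assumed to match the one created above.

import bpy

scene = bpy.context.scene
scene.use_nodes = True                 # turn on the compositing node tree
tree = scene.node_tree

# A second Render Layers node, pointed at the "Background" render layer.
# This assumes a render/view layer named "Background" already exists.
bg_node = tree.nodes.new(type='CompositorNodeRLayers')
bg_node.layer = 'Background'
bg_node.name = 'Background'            # rename so later snippets can find it
bg_node.label = 'Background'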
Figure RCD.35: The "Background" Render Layer node.

Immediately after the "Background" Render Layer, we have added an RGB Curves node to darken and reduce the contrast of the render. Contrast can be reduced by performing the opposite of the "S Curve": darkening the light areas and brightening the shadows. Before putting both layers together, though, you can use an old trick to help bring out the foreground objects.

Quick and Dirty Depth of Field

A simple blur applied to the background makes it look as though the camera lens is focused on the gauge.
Figure RCD.36: The Blur node for the background.

A Gaussian blur has been applied with X and Y settings of 5. We have used the Gamma button to emphasize the bright parts of the image, ensuring that the out-of-focus dials remain visible. Also, as we're pretending that the background is blurred due to camera focus, it may be worth using the Bokeh option.
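As a sketch of the same chain in Python (again using current bpy identifiers, with the "Background" node name assumed from the previous snippet), the RGB Curves and Blur nodes could be created and wired like this:

import bpy

tree = bpy.context.scene.node_tree
nodes, links = tree.nodes, tree.links

bg_node = nodes['Background']                        # Background Render Layers node (name assumed)
curves = nodes.new(type='CompositorNodeCurveRGB')    # darken / reduce contrast here

blur = nodes.new(type='CompositorNodeBlur')
blur.filter_type = 'GAUSS'                           # Gaussian blur
blur.size_x = 5
blur.size_y = 5
blur.use_gamma_correction = True                     # the "Gamma" option: emphasize bright areas
blur.use_bokeh = True                                # lens-like blur shape

links.new(bg_node.outputs['Image'], curves.inputs['Image'])
links.new(curves.outputs['Image'], blur.inputs['Image'])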
Figure RCD.37: The blurred, darkened background layer.

The combination of the steam gauge with the background can be accomplished, once again, with the Mix node. This time, however, you will use the default Mix mode. How can you get the node to avoid blending the entire area of the images together, though? As you've already learned, adjusting the Factor affects how much of the image from the lower input socket is composited over the other. In addition to being a simple number, though, the Factor setting can also take an image as its input. By connecting the Alpha pass from the original Render Layers node, portions of the image that were completely opaque (the gauge itself) receive a Factor setting of 1.0, while the non-rendered areas receive a Factor of 0.0. The result is that the Alpha pass is used as a mask for the Mix node.
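Scripted with bpy, the masked mix might look something like this sketch. The node names are defaults or carry over from the earlier snippets, not anything taken from the book's example file:

import bpy

tree = bpy.context.scene.node_tree
nodes, links = tree.nodes, tree.links

fg_node = nodes['Render Layers']     # the foreground "1 Render Layer" node (name assumed)
bg_blur = nodes['Blur']              # end of the darkened, blurred background chain

mix = nodes.new(type='CompositorNodeMixRGB')
mix.blend_type = 'MIX'               # the default Mix mode

links.new(bg_blur.outputs['Image'], mix.inputs[1])   # background into the upper Image socket
links.new(fg_node.outputs['Image'], mix.inputs[2])   # gauge render into the lower Image socket

# The Alpha pass drives the Factor: opaque pixels (the gauge) get 1.0,
# empty pixels get 0.0, so the Alpha acts as a mask for the mix.
links.new(fg_node.outputs['Alpha'], mix.inputs['Fac'])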
Figure RCD.37.1: The Alpha Channel from Render Layer 1.
Figure RCD.38: Mixing the background with the rendered element.
Figure RCD.39: The rendered, composited image with background.

Before you finish, you'll look at one more excellent use of the Compositor, one that's suited to animation but that can also enhance single-frame renders.

Vector-Based Motion Blur

Load the file "CompositeStage7.blend" and render to fill the passes.
Figure RCD.40: The node network for compositing the spinning pointer.

With this file, you will produce the animation. However, as the only thing that moves is the pointer on the gauge's face, it would be a waste of time to render the entire image once for each frame. The animation for this piece is 250 frames long, and each frame takes, on the computer used for this discussion, almost a minute to render. That is almost four hours of render time. If you use a single minute-long render to produce a background, then render only the pointer as it spins, you can reduce the per-frame render time to around two seconds, saving nearly three hours and fifty minutes of render time!

In this new file, you will see that only three objects exist: the pointer and the main body and face of the gauge. You will only use the render of the pointer when you make the final composite, but the shape of the gauge itself will be useful too. When producing an effect like this, you will need to have already rendered the rest of the image, without the animated portions, to use as a background. We have already done that in the example file, bringing the image into the Compositor with an Image node found in Add->Input->Image. Also, the only 3D objects left in the file are the pointer itself, the main gauge body and face, and the lamps. If you had wanted, you could have simply moved the extra objects to a disabled layer.
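If you were setting this up from a script, the pre-rendered background could be brought in with an Image node along these lines. The file path is purely an example, not the file shipped with the book:

import bpy

tree = bpy.context.scene.node_tree

# Load the saved background render once and feed it into the compositor.
bg_image = bpy.data.images.load('//renders/gauge_background.png')   # example path
img_node = tree.nodes.new(type='CompositorNodeImage')
img_node.image = bg_image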
Blender has two methods of producing motion blur. The older method, available with the "MBLUR" toggle in the Render buttons, relied on rendering the entire scene several times on fractional frame numbers, then combining the results. Of course, this came at the cost of having to render your whole scene up to sixteen times per frame. Vector-based motion blur, on the other hand, uses the Compositor to examine how the objects in a scene are moving, then builds a new image with moving objects smeared along their trajectories and blended into the scene.

Figure RCD.41: The Vector Blur node.

Vector Blur is found under Add->Filter->Vector Blur. To make it work, you will need some sort of image to blur (either a Combined pass or a composited image), and the Z and Vec passes enabled in the Render Layers tab of the Render buttons. In this example, all three input sockets connect directly to their output counterparts on the "1 Render Layer" node.
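A bpy sketch of the same hookup, with the extra passes enabled first, might look like the following. Socket and property names here follow recent Blender releases (the Z output, for instance, is labelled "Depth"), so they will not match the 2.4x buttons exactly:

import bpy

scene = bpy.context.scene
view_layer = bpy.context.view_layer

# Vector Blur needs the Z and Vector (speed) passes.
view_layer.use_pass_z = True
view_layer.use_pass_vector = True

tree = scene.node_tree
nodes, links = tree.nodes, tree.links
rl = nodes['Render Layers']          # the "1 Render Layer" node (name assumed)

vblur = nodes.new(type='CompositorNodeVecBlur')
links.new(rl.outputs['Image'], vblur.inputs['Image'])
links.new(rl.outputs['Depth'], vblur.inputs['Z'])       # the Z pass
links.new(rl.outputs['Vector'], vblur.inputs['Speed'])  # the Vec (speed) pass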
Figure RCD.42: The pointer blurred on Frame 189, before compositing.

Obviously, objects will need to be in motion on the rendered frame for Vector Blur to have any effect.

Note: As you need a good Alpha channel with which to composite, remember to switch the renderer from Sky to Key mode on the Render tab.
Figure RCD.43: Key mode on the Render tab.

Completing the Shot

To finish this example, the dial needs to be mixed back over the background image.
Figure RCD.44: The Alpha Over node.

When compositing an image with built-in Alpha (a render of a lone object like the pointer), the AlphaOver node does the job. AlphaOver is found in Add->Color->Alpha Over. It follows the same socket stacking rules as the other nodes, with the base image in the upper socket and the image with Alpha in the lower socket. In the example, the saved image of the gauge is used as a backdrop in the top Image input, while the vector-blurred pointer with built-in Alpha fills the bottom Image input. You can see from the final composite, though, that something is wrong.
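Sketched in bpy, the composite could be wired as below. The transparent-background switch stands in for the 2.4x "Key" button, and the node names are assumed from the earlier snippets:

import bpy

scene = bpy.context.scene
tree = scene.node_tree
nodes, links = tree.nodes, tree.links

# Transparent background, so the pointer render carries a usable Alpha.
scene.render.film_transparent = True

backdrop = nodes['Image']            # the saved gauge render (default Image node name)
vblur = nodes['Vector Blur']         # the blurred pointer from the previous sketch

alpha_over = nodes.new(type='CompositorNodeAlphaOver')
links.new(backdrop.outputs['Image'], alpha_over.inputs[1])  # base image, upper socket
links.new(vblur.outputs['Image'], alpha_over.inputs[2])     # image with Alpha, lower socket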
Figure RCD.45: Composite with the pointer sticking out.

One last trick, then, to properly mask the pointer. This is why you still have the gauge body and face hanging around. In this file, both the gauge body and face have been placed on Layer 2, and a separate Render Layer created for them called "Gauge Body." With the body itself selected, it has been assigned an Object Index by using the "PassIndex" spinner on the Objects and Links panel of the Object buttons (F7).
Figure RCD.46: The PassIndex of the body set to 1.

In the Render Layer settings for the "Gauge Body" layer, you can see that all passes have been disabled with the exception of the IndexOb pass. You don't need to care about colors, materials or shading here: you want a pass that will generate a mask of this object to use on the pointer. The PassIndex value of all objects defaults to 0 unless changed by you. By assigning a PassIndex of 1 to the gauge body in the Object buttons, you will be able to single it out in the Compositor.
Figure RCD.47: The node network to build a mask from an Object Index pass.

The IndexOb pass from the Gauge Body Render Layer (note how, with no Combined pass sent, there is no image at all in the preview) is connected to an ID Mask node, from Add->Converter->ID Mask. The ID value in the ID Mask node is set to 1, to correspond with the value you set on the 3D object. After that, an RGB Curves node is used to invert the resulting mask. That image fills the Factor input socket on the AlphaOver node, correctly masking the spinning pointer and completing your shot.
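As a final bpy sketch, the index mask could be built and fed into the AlphaOver factor like this. The object and node names are placeholders, an Invert node stands in for the RGB Curves flip described above, and the pass and socket names follow recent Blender releases:

import bpy

scene = bpy.context.scene
view_layer = bpy.context.view_layer

# Tag the gauge body and enable the Object Index pass for its layer.
bpy.data.objects['GaugeBody'].pass_index = 1     # object name is a placeholder
view_layer.use_pass_object_index = True

tree = scene.node_tree
nodes, links = tree.nodes, tree.links
body_rl = nodes['Gauge Body']        # Render Layers node for the body (name assumed)
alpha_over = nodes['Alpha Over']     # the AlphaOver node from the previous sketch

id_mask = nodes.new(type='CompositorNodeIDMask')
id_mask.index = 1                    # matches the object's pass index

invert = nodes.new(type='CompositorNodeInvert')

links.new(body_rl.outputs['IndexOB'], id_mask.inputs['ID value'])
links.new(id_mask.outputs['Alpha'], invert.inputs['Color'])
links.new(invert.outputs['Color'], alpha_over.inputs['Fac'])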
Figure RCD.48: The final shot, correctly composited with the animated blurred spinner.

Getting the Shot Out of Your Department

So, you've finally finished the job. The managers who have been planning the production had allocated five hours to your department on this shot for rendering and sweetening. Because you're a pro with the Compositor, you were able to set up the nodes in only a half hour (perfectly reasonable once you're experienced), and rendered the finished animation frames before the rest of the first hour was up. Have a sandwich. Grab some coffee. You've earned it. Well, the Compositor's earned it, but you can take the credit.
Chapter 13: Render Settings: Discussion

By Roland Hess

There are only a few useful settings for the renderer that are not related to compositing.

RSD.01: The Render Buttons.

The Render buttons are accessible from any buttons window, and can be found by clicking on the Scene context and Render sub-context on the header, or by pressing F10. When rendering, there are several things you need to specify: the render size, where and in what format to save the finished product, and the quality options you would like the renderer to use.

Render Size

The finished size of the render is chosen in the Format panel, with the SizeX and SizeY controls. The column of buttons to the right contains preset values for different rendering tasks.
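For reference, the same size settings can be made from a script. The property names below are from the current bpy API rather than the buttons described here, and the values are only an example:

import bpy

scene = bpy.context.scene

# Equivalent of the SizeX / SizeY controls on the Format panel.
scene.render.resolution_x = 800
scene.render.resolution_y = 600
scene.render.resolution_percentage = 100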
RSD.02: Setting render size and preset buttons on the Format panel.

Output Format

Rendered images are not automatically saved. You must press F3 to save them, or select "Save Image..." from the File menu. When Blender saves the image, it uses the format specified on the Format panel.
RSD.03: The different image formats available for saving.

The default image format is Jpeg, but, as Jpeg compression can leave ugly artifacts, you should probably change it to PNG, and set the Quality spinner to 100. With this menu, you can also choose from the animation formats appropriate to your computer (Quicktime, AVI codec), which will bring up your operating system's animation saving dialogue. If you want to save an image's Alpha channel along with the rest of the render, you need to select the "RGBA" button at the bottom of the panel, as well as an image format that supports Alpha channels (Targa, PNG, OpenEXR and MultiLayer).

If you are rendering an animation and have chosen a still image format (PNG, Targa, Jpeg, etc.) instead of an animation format (AVI, Quicktime), Blender will save a series of numbered image files, one for each rendered frame. It is then up to you to put the images together into a playable animation, using either Blender or some other program. Animated image sequences are saved automatically to the folder specified in the top file selector of the Output tab.
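Scripted, the output choices described above might look like this sketch, again using current bpy property names and an example path:

import bpy

scene = bpy.context.scene

# PNG output with the Alpha channel kept (the "RGBA" button), plus an
# output folder and name prefix for animation frames.
scene.render.image_settings.file_format = 'PNG'
scene.render.image_settings.color_mode = 'RGBA'
scene.render.filepath = '//renders/frame_'      # example path

# Rendering an animation with a still-image format writes one numbered
# file per frame to that location:
# bpy.ops.render.render(animation=True)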