Clear functional boundaries for the Blender real-time engine
For the web plug-in, VR applications and others


Motivation

This small document gives an overview of what needs to be done to the current (as of May 22nd, 2001) stand-alone game player. These remarks came out of my work on the web plug-in. Imo, drawing clear boundaries between the various functions also facilitates the development of other products.

In general, the various stages that can be distinguished in a (3D) graphics application need to be separated more clearly. An important issue is the need for a graphics context while the data is being initialized.

It must be possible to issue a render command that generates a single image, regardless of whether it is a one-window or a multi-window application. This has to work without interfering with the other functionality of the real-time engine.
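
To make this boundary concrete, here is a minimal sketch of what such an interface could look like. All names (RealtimePlayer, InitData, InitContext, Update, Render) are hypothetical and do not exist in the engine source; the only point is that each stage gets its own entry point, so a host application can request a single image without touching the rest.

    // A hypothetical boundary sketch; none of these names exist in the
    // current engine source.
    #include <cstdio>

    class RealtimePlayer {
    public:
        void InitData()       { std::puts("load and convert the database"); }          // no context needed
        void InitContext()    { std::puts("build display lists, download textures"); } // per rendering context
        void Update()         { std::puts("physics, collisions, game logic"); }        // once per frame
        void Render(int view) { std::printf("render one image for view %d\n", view); } // one image, any time
    };

    int main() {
        RealtimePlayer player;
        player.InitData();     // before any window or context exists
        player.InitContext();  // after a rendering context has been created
        player.Update();       // application stage for this frame
        player.Render(0);      // rendering stage; could be called once per window
    }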

The remarks also apply to creating a Virtual Reality player that uses multiple screens (full-screen windows) to display the virtual world, with or without stereo projection. Typically, the images for all screens are rendered at the same time, so the render functions in particular need to be isolated.

Since the engine source code is already well structured, I think this adjustment does not have to cost much time. Also, imo it does not affect other functions of the engine, so there should be no reason not to do it.


Different stages for rendering for one window

When rendering the image for one window, e.g. with OpenGL, various stages in the processing of the data can be distinguished. This holds for a web plug-in as well as for a Virtual Reality player. A small GLUT sketch after the list below shows how these stages map onto callbacks.

· Initialization, only at startup

- Data initialization, once per application. The whole database is prepared; no rendering context is necessary, so no window has to be opened yet.
(in glut this is done before the main loop starts; call this the application process)

- Rendering context initialization, once for each rendering context. Display lists are generated, textures are downloaded into texture memory, etc. These things can also be done later; the important point is that they have to be done once for each rendering context.
(in glut this is done at the first call of the rendering callback function; call this the rendering process)


· For each frame

- Application-specific update: physics, collision detection, game logic, etc. No rendering context is necessary; this happens once per frame.
(compare the glutIdleFunc() callback; application process)

- Rendering the current view for the window
(compare with glutDisplayFunc() callback, rendering process)
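
To illustrate how these stages map onto glut, here is a minimal skeleton. This is a sketch only; the init_data, init_render_context, update_application and render_view functions are hypothetical placeholders for the corresponding engine calls.

    // Minimal GLUT skeleton mapping the four stages above onto callbacks.
    #include <GL/glut.h>

    static void init_data()           { /* prepare the whole database, no GL context needed */ }
    static void init_render_context() { /* display lists, textures, ... once per context */ }
    static void update_application()  { /* physics, collision detection, game logic */ }
    static void render_view()         { /* draw the current view for this window */ }

    static void display() {
        static bool context_ready = false;
        if (!context_ready) {          // first call: the rendering context now exists
            init_render_context();
            context_ready = true;
        }
        render_view();
        glutSwapBuffers();
    }

    static void idle() {
        update_application();          // once per frame, no rendering context needed
        glutPostRedisplay();           // ask for a redraw
    }

    int main(int argc, char **argv) {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);

        init_data();                   // application process: once, before any window

        glutCreateWindow("player");
        glutDisplayFunc(display);
        glutIdleFunc(idle);
        glutMainLoop();
        return 0;
    }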


Rendering for high-end VR with multiprocessing

Usually a (high-end) virtual reality setup uses a powerful multiprocessor computer and the software is multiprocessing. Typically, one process is then 'fired' for each of the tasks described above. Therefore it is necessary (in fact unavoidable) to define a separate process for each task. For example, if we are driving a so-called PowerWall (a large projection surface made of a couple of screens next to each other) with 3 screens, we need 4 processes: 1 application process and 1 render process per screen (3 in total). A rough sketch of this layout follows below.
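
As a rough sketch of this process layout, assuming C++20 and using threads as stand-ins for the separate processes (a real multi-pipe installation would use real processes and shared memory instead), the per-frame synchronization could look like this. All names are made up for the illustration.

    // One application task plus one render task per screen, meeting at a
    // barrier twice per frame. Threads are used here only for brevity.
    #include <barrier>
    #include <cstdio>
    #include <thread>
    #include <vector>

    int main() {
        constexpr int kScreens = 3;                 // e.g. a 3-screen PowerWall
        constexpr int kFrames  = 100;
        std::barrier<> frame_sync(kScreens + 1);    // render tasks + application task

        std::vector<std::jthread> renderers;
        for (int screen = 0; screen < kScreens; ++screen) {
            renderers.emplace_back([&, screen] {
                // rendering context initialization for this screen would go here
                for (int frame = 0; frame < kFrames; ++frame) {
                    frame_sync.arrive_and_wait();   // wait until the update is done
                    std::printf("screen %d renders frame %d\n", screen, frame);
                    frame_sync.arrive_and_wait();   // signal that this screen is done
                }
            });
        }

        // Application process: physics, collision detection, game logic.
        for (int frame = 0; frame < kFrames; ++frame) {
            /* update_world(); */                   // no rendering context needed
            frame_sync.arrive_and_wait();           // release the render tasks
            frame_sync.arrive_and_wait();           // wait until all screens are done
        }
    }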


Multiprocessing is the future for the real-time engine! (imho)

The proposition I make here certainly does not interfere with how the engine is used right now. Even better, I think this restructuring makes the engine more powerful, since it can then take advantage of multiprocessor machines. Coming from a supercomputing background, one can see a shift in how more performance is obtained from a computer system: nowadays supercomputers are built from many smaller units, each with its own CPU, instead of pushing one single CPU to very high performance. So adapting the source code for multiprocessing only enhances the structure and the code.

One should look beyond the regular PCs that are now so common. I can think of black-box solutions for games and entertainment systems that can easily be built with such computers. IMO we will see a shift to multiprocessor systems there as well, comparable to the shift in high-end supercomputing.

Nowadays only a very limited number of applications make use of multiprocessing on PCs, which is probably why such machines are not sold that often, at least in the Netherlands. However, I can think of a number of simulation applications that would benefit from it. And of course, games are simulations…

Proposition/Conclusion

The proposition that I would like to make is that the code is restructured according to the scheme described above. Simple as that ;-)