Rendering

RMM

New member
I don't know the details of how rendering is accomplished in most N64 emulators (or any emulator, for that matter). I'm familiar with rendering concepts in general, though, and I've thought over what's potentially plausible, but I'm stuck on a number of questions.

Basically, I'm curious whether it's possible for model data to be brought into an N64 game as it's played, similar to how plugins have made it possible to retexture a game.

However, when it comes down to it, how does the rendering actually take place? Does the emulator build the 3D world itself, based on what the game says it should be? Or does the game handle the rendering along with everything else, while the emulator merely translates the native instructions into something that runs on the given platform (e.g. x86 PCs) and displays whatever is in the game's video memory on screen?

How does the Rice Video plugin operate, or any other for that matter? Does it have some knowledge of a model's position, orientation, etc. at runtime, retexturing the models as they're being rendered in the emulator's software? Or, if the game itself is doing the rendering, are the new textures being 'layered' on top of the game's rendering job?

My point with the video plugins is that there clearly is some ability to alter games as they run, but I'd like to know what the plugins are actually doing: altering the game's rendered work, or the emulator's rendered work?

If the game does the rendering, then I'd imagine altering which models are drawn would be difficult at best. However, if the emulators render the games, then swapping models and textures with whatever the user has locally for the emulator to use should be much easier, and without any effect on how the game continues to operate (albeit with collision models in the game remaining the same).
 

Gonetz

Plugin Developer (GlideN64)
The rendering process is very low-level. The video plugin processes a display list, which basically consists of vertex-loading commands, matrix-loading commands, texture-loading commands, pixel-pipeline settings, and render-polygon commands. The plugin loads vertices and calculates their screen and texture coordinates, Z, color, and fog. Then it loads textures, sets rendering modes, and executes draw-polygon commands, which use the already-calculated vertices. Basically, that's all. No logical 'models' exist at this level, so you can forget about model replacement. If there were a way to determine that a particular set of vertices forms a 'model' and that it could be replaced by another (user-supplied) set, it would already have been done.
 

Azimer

Emulator Developer
Moderator

Good answer. That's the same reason there aren't sound replacements without hacking the ROM image itself.
 
OP

RMM

New member
Thanks for that, Gonetz. That's what I feared, but it's good to hear it explained like that and to understand more of how it works under the hood.
 
