Glide64 vs GCC. Need help

h4tred

Guest
Yay! :D

Now we need it to load... I finally got a full Linux dual-boot setup up and running, complete with Wine and Code::Blocks for Linux, plus the newest Mupen64Plus for testing purposes...
 
OP
Gonetz

Plugin Developer (GlideN64)
So far I can't load it. This Mac does not belong to me, and it does not have GTK installed because of conflicts with mc. I can't run Mupen without GTK.
I asked the owner to fix it, but it will take time.
In any case, it's too early to get excited. I've been fighting with the Linux port for a few days, and it has already fuc*d me to death.
 
h4tred

Guest
Well, if it makes you happy... I finally fixed and committed the Unix/Mac VRAM detection.

It produces the same expected output as the Windows version. It does expect a hardware-accelerated rendering context when detecting, though, since that's how SDL's video RAM detection works.
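For reference, here is a minimal sketch of what such SDL 1.2 based detection can look like (an assumption on my part, not the actual wrapper code): SDL_GetVideoInfo() exposes a video_mem field in kilobytes, but it is only meaningful once a hardware-accelerated video mode has been set, and on Linux/X11 it typically stays 0.
Code:
#include <SDL.h>
#include <stdio.h>

/* Sketch only: query total video memory via SDL 1.2.
 * video_mem is reported in kilobytes and may be 0 if the
 * driver/platform cannot provide it (e.g. X11 without DGA). */
static unsigned detect_vram_kb(void)
{
    const SDL_VideoInfo *info = SDL_GetVideoInfo();
    return info ? info->video_mem : 0;
}

int main(void)
{
    if (SDL_Init(SDL_INIT_VIDEO) != 0)
        return 1;
    /* Ask for a hardware surface so the driver has a chance to fill video_mem in. */
    SDL_SetVideoMode(640, 480, 0, SDL_HWSURFACE);
    printf("Detected VRAM: %u KB\n", detect_vram_kb());
    SDL_Quit();
    return 0;
}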
 

Slougi

New member
SDL on Mac is a little weird.

In the SDL headers there is a line like this:
Code:
#define main SDL_main

SDL (via the SDLmain library) provides its own main function, which then calls your main, renamed to SDL_main by that define. You do not need any of that for plugins, so don't link against SDLmain.
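If a plugin does need to include SDL.h, one way to sidestep the rename entirely (a sketch, not necessarily what Glide64 does) is to undo the define right after the include and simply not link against SDLmain:
Code:
#include <SDL.h>

/* A plugin exports no main() of its own, so undo SDL's rename
 * and leave -lSDLmain (or SDLMain.m on Mac) out of the build. */
#ifdef main
#undef main
#endif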
 
OP
Gonetz

Plugin Developer (GlideN64)
Mac port: the plugin loads and the GUI works, but when I open the config dialog, a message window with the Mupen logo appears: "Error, Failed to read config options". Moreover, my plugin can't read its ini file. I've tried everything; without the ini it can't work properly. Are there any tricks to config loading on the Mac port?

PS: At least my code does not crash in weird places like it does on Linux.
 
OP
Gonetz

Plugin Developer (GlideN64)
It seems the crashes are caused by bugs in the g++ compiler. The current version is 4:4.3.3; I switched to 4.2 and the results became much more predictable. Now I've hit a problem that lies outside Glide64. Details:
main_gtk.c, line 83

void gui_init(int* argc, char*** argv)
{
    /* Initialize multi-threading support. */
    g_thread_init(NULL);

This call causes "GThread-ERROR **: GThread system may only be initialized once.
aborting..."

gui_init is called after the plugins are loaded. Glide64 runs wxWidgets initialization on load, and wxWidgets uses GTK. I guess the thread system has already been initialized by wxWidgets, which causes this error. How can it be resolved?

Edit:
Solution:
void gui_init(int* argc, char*** argv)
{
    /* Initialize multi-threading support. */
    if (!g_thread_get_initialized())
        g_thread_init(NULL);

Please fix it in the official Mupen64Plus sources too.
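One note on the guard: g_thread_get_initialized() only appeared in GLib 2.20, so on older GLib the same check can be written with the g_thread_supported() macro. A sketch of the guarded function under that assumption:
Code:
#include <glib.h>

void gui_init(int* argc, char*** argv)
{
    /* Initialize multi-threading support only if nobody else has
     * done it yet (here: GTK pulled in via the plugin's wxWidgets).
     * g_thread_supported() is TRUE once the thread system is up. */
    if (!g_thread_supported())
        g_thread_init(NULL);

    /* ... the rest of the GTK GUI initialization ... */
}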
 
OP
Gonetz

Plugin Developer (GlideN64)
Report on the Linux port: the GUI is fully functional, but the system hangs on ROM load and needs a reboot. I need mudlord's help.

Mac OS port: the GUI itself works, but it can't load the ini file, hangs on ROM load, and crashes on exit.
 

Slougi

New member
It seems the crashes are caused by bugs in the g++ compiler. The current version is 4:4.3.3; I switched to 4.2 and the results became much more predictable. Now I've hit a problem that lies outside Glide64. Details:
main_gtk.c, line 83

void gui_init(int* argc, char*** argv)
{
    /* Initialize multi-threading support. */
    g_thread_init(NULL);

This call causes "GThread-ERROR **: GThread system may only be initialized once.
aborting..."

gui_init is called after the plugins are loaded. Glide64 runs wxWidgets initialization on load, and wxWidgets uses GTK. I guess the thread system has already been initialized by wxWidgets, which causes this error. How can it be resolved?

Edit:
Solution:
void gui_init(int* argc, char*** argv)
{
    /* Initialize multi-threading support. */
    if (!g_thread_get_initialized())
        g_thread_init(NULL);

Please fix it in the official Mupen64Plus sources too.

Seems like a simple workaround. Does the same thing happen with the Qt4 GUI?
 
h4tred

Guest
The issue is very simple: we need a means of detecting the VRAM size via SDL.

The info from those methods currently returns 0 under Linux, when it should report the VRAM size. That could account for the crash, as the wrapper needs that information for memory allocation.

Or, as I was suggesting, make the user select how much RAM their video card has. According to the page I linked, the VRAM detection methods are very specific in how they operate, and certain conditions must be fulfilled for SDL's VRAM detection not to return 0. Hence why I brought it up.
 

Slougi

New member
The issue is very simple: we need a means of detecting the VRAM size via SDL.

The info from those methods currently returns 0 under Linux, when it should report the VRAM size. That could account for the crash, as the wrapper needs that information for memory allocation.
Hmm, what exactly do you do with the VRAM info? You never get to use the full amount on modern desktops anyway. Maybe the easiest way is to assume a certain amount and let the user increase it if there is an advantage to doing so.

Or, as I was suggesting, make the user select how much RAM their video card has. According to the page I linked, the VRAM detection methods are very specific in how they operate, and certain conditions must be fulfilled for SDL's VRAM detection not to return 0. Hence why I brought it up.

The link was talking about 2D surfaces. Because X is fundamentally a network protocol, direct access to VRAM cannot be presumed. There is an X extension called DGA that provides direct access to the framebuffer for applications running on the same machine as the X server, but it is deprecated and will soon be removed. AFAIK DGA never worked in conjunction with 3D graphics anyway.
 
h4tred

Guest
Hmm, what exactly do you do with the VRAM info? You never get to use the full amount on modern desktops anyway. Maybe the easiest way is to assume a certain amount and let the user increase it if there is an advantage to doing so.

Basically, the wrapper tries to emulate a Voodoo 5, which includes emulating how the texture units work. The VRAM size is used, along with other things, to calculate the size of an emulated TMU. So it's part of the wrapper's basic design.

The idea behind the automatic VRAM detection was to spare the end user from having to select the VRAM amount themselves, for convenience. I guess now we can just have them set a buffer size, since the wrapper works well on cards with 64 MB of VRAM and up.
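As a rough illustration of that fallback idea (hypothetical names and a simplified split; the wrapper's real memory layout is more involved):
Code:
/* Hypothetical sketch: use the detected VRAM when SDL reports it,
 * otherwise fall back to a fixed default the user can override. */
#define DEFAULT_VRAM_KB (64u * 1024u)  /* 64 MB baseline discussed above */
#define NUM_TMUS        2u             /* Voodoo-style texture units */

static unsigned tmu_size_bytes(unsigned detected_vram_kb)
{
    unsigned vram_kb = detected_vram_kb ? detected_vram_kb : DEFAULT_VRAM_KB;
    /* Split the texture budget evenly across the emulated TMUs
     * (illustrative only). */
    return (vram_kb * 1024u) / NUM_TMUS;
}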

The link was talking about 2D surfaces. Because X is fundamentally a network protocol, direct access to VRAM cannot be presumed. There is an X extension called DGA that provides direct access to the framebuffer for applications running on the same machine as the X server, but it is deprecated and will soon be removed. AFAIK DGA never worked in conjunction with 3D graphics anyway.

Ah, thanks. No wonder the function is returning 0. :(
 

Slougi

New member
Basically, the wrapper tries to emulate a Voodoo 5, which includes emulating how the texture units work. The VRAM size is used, along with other things, to calculate the size of an emulated TMU. So it's part of the wrapper's basic design.

The idea behind the automatic VRAM detection was to spare the end user from having to select the VRAM amount themselves, for convenience. I guess now we can just have them set a buffer size, since the wrapper works well on cards with 64 MB of VRAM and up.

Right. If I may, I'd suggest just defaulting to 64 MB and letting the user raise it if there is a need.

Keep in mind that on composited desktops with 3D effects, a large chunk of the VRAM is already in use to hold window contents for compositing. This is probably true on Windows >= Vista and OS X as well.
 
OP
Gonetz

Plugin Developer (GlideN64)
Seems like a simple workaround. Does the same thing happen with the Qt4 GUI?
Workaround? It is common practice to check whether something is already initialized before initializing it. I did not try the Qt4 GUI.
 
OP
Gonetz

Plugin Developer (GlideN64)
That seems unlikely. What kind of code was causing the problems?
Who knows? When I compile my sources with the current version, they behave strangely. When I switch to 4.2, they work as expected. Same sources, same compiler options, same makefile.
 
