And in for ( int i = 0; i < 30; i++ ), 30 is just a number I came up with. The emulation is too slow if you execute 1 opcode per idle event, so I execute 30 opcodes per idle event. This is a big hack, but I haven't come up with any better way of doing timing with wxWidgets. If anyone knows a better way to do timing with wxWidgets, I'd be glad to hear about it.
I haven't used wxWidgets, but I looked into it a bit.
From what I could find on idle events, there's nothing to indicate that they're fired at a fixed time interval. It could very well depend on where the code is being run, and you don't want that. What actually appears to happen is that the idle event is fired constantly as long as there are no other events to process, which you also don't want to base timing on.
Looking further I found this:
http://www.wxwidgets.org/manuals/stable/wx_wxupdateuievent.html
It appears you can set up an event that does go off at a given interval, which is exactly what you want.
30 is a pretty arbitrary number indeed, and in this case a pretty small one. What you should do is pick a frame rate and set the timer around that. Something around 30 or 32 Hz would be good, which gives you 31-33 ms intervals for the timer event. In the timer handler you would run N instructions; whatever N is will determine the effective clock speed. Chip8 doesn't have an actual clock speed, but something around 512 KHz or higher should be far more than sufficient (especially considering it'd be executing one instruction per clock cycle, which was unheard of for processors of that day). For 32 Hz updates, that means 16384 instructions processed per screen update. You could use lower values, of course, but too low and the emulation will become slower or fail outright.
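For what it's worth, here's a rough, untested sketch of what that could look like using wxWidgets' wxTimer. The Chip8 struct, the 31 ms interval, and the 16384 instruction count are just placeholders standing in for whatever your core and chosen rates actually are:

#include <wx/wx.h>
#include <wx/timer.h>

// Stand-in for the real emulator core; purely illustrative.
struct Chip8
{
    void ExecuteInstruction() { /* fetch, decode, execute one opcode */ }
};

class EmuFrame : public wxFrame
{
public:
    EmuFrame()
        : wxFrame(nullptr, wxID_ANY, "Chip8"),
          m_timer(this)
    {
        Bind(wxEVT_TIMER, &EmuFrame::OnTimer, this);
        m_timer.Start(31);                 // fire roughly 32 times per second
    }

private:
    void OnTimer(wxTimerEvent&)
    {
        // One frame's worth of work per tick: 16384 * 32 ~= 512K instructions/sec.
        for (int i = 0; i < 16384; ++i)
            m_chip8.ExecuteInstruction();

        Refresh();                         // redraw the display once per tick
    }

    wxTimer m_timer;
    Chip8   m_chip8;
};

class EmuApp : public wxApp
{
    bool OnInit() override { (new EmuFrame())->Show(true); return true; }
};

IMPLEMENT_APP(EmuApp);

The point is just that the emulation work happens in a handler that fires at a fixed interval, rather than in whatever spare time the idle event happens to get.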
As far as I remember, Chip8 timing is regulated more by input than by video, so strict timing "accuracy" is probably not that important, but even so, large inconsistencies in screen updates might end up being noticeable.
By the way, since this thread is on the topic of timing again, I wanted to address something that I meant to address almost a year ago but never did:
What bcrew1375 said is wrong, because: say your PC is able to execute two times faster than your desired instructions per second. As soon as the emulator finishes executing the desired instructions for that second, it will halt and wait until the second has passed, which will cause a desynchronization in timing. (The emulator will execute the whole second's worth of instructions within the first half of the second, so it runs at double the desired speed for that half, and then just waits out the other half until the next second starts, and that's not what you want.)
Secondly, 100 instructions per second is too slow; try 500-800.
Let's say you want to execute 1,000 instructions per second. Then, after EACH instruction, you will have to wait until 1 millisecond has passed before executing the next one.
Time should be checked on every instruction (if you want it to be as accurate as specified), not every second.
I'm quoting this now because looking over things (including the most recent example) I see a lot of people falling into this dangerous line of thinking.
First, you really can't synchronize per instruction for anything with realistic operating speeds. Timer precision and accuracy are just too low, and the overhead of the timer calls alone would add a significant cost.
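To put some numbers behind the overhead point, here's a small standalone test you could compile and run; std::chrono and std::this_thread are just a convenient way to measure, and the exact figures depend entirely on your OS and scheduler:

// Quick check of how well a 1 ms per-instruction wait could possibly work.
// The interesting part is the overshoot, not the exact values.
#include <chrono>
#include <cstdio>
#include <thread>

int main()
{
    using clock = std::chrono::steady_clock;

    const int iterations = 1000;
    auto start = clock::now();
    for (int i = 0; i < iterations; ++i)
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    auto elapsed = clock::now() - start;

    double ms = std::chrono::duration<double, std::milli>(elapsed).count();
    std::printf("1000 x 1ms sleeps took %.1f ms (%.3f ms each)\n",
                ms, ms / iterations);
    // On a typical desktop this reports well over 1000 ms total, so an
    // emulator that pauses after every instruction ends up running slow.
    return 0;
}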
Second, there's no reason at all to do so. When emulating a machine, what matters first and foremost is keeping the virtual time of the machine in order. What this means is that internal events should occur in the correct order relative to each other; everything is synchronized properly to an internal clock. For Chip8 this doesn't really apply because there aren't internal events (least of all interrupt-causing ones), but for almost any real platform it would. Keeping virtual time consistent has nothing to do with keeping consistency with real time.
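If it helps, here's a toy illustration of what I mean by ordering internal events against a virtual clock rather than wall-clock time. The cycle numbers, per-instruction cost, and event names are completely made up; a real core would derive them from the hardware being emulated:

#include <cstdint>
#include <cstdio>
#include <functional>
#include <queue>
#include <vector>

struct Event
{
    std::uint64_t when;                  // virtual cycle the event fires on
    std::function<void()> fire;
    bool operator>(const Event& o) const { return when > o.when; }
};

int main()
{
    std::uint64_t cycles = 0;            // the machine's virtual time
    std::priority_queue<Event, std::vector<Event>, std::greater<Event>> pending;

    pending.push({ 100, [] { std::puts("timer interrupt"); } });
    pending.push({ 250, [] { std::puts("vblank");          } });

    while (!pending.empty())
    {
        cycles += 4;                     // pretend each instruction costs 4 cycles
        while (!pending.empty() && pending.top().when <= cycles)
        {
            pending.top().fire();        // events fire in virtual-time order,
            pending.pop();               // regardless of how fast the host runs
        }
    }
    return 0;
}

However fast or slow the host happens to run this loop, the vblank still lands after the timer interrupt at the same virtual cycle, and that internal consistency is what actually matters.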
When it comes to emulating a machine, the only way a user can tell whether it's running at the correct speed is by observing that machine's external output. All machines are limited in how much external output they can produce per unit of time: by refresh rate (maybe per scanline), audio frequency, and so forth. Audio in particular is clocked correctly by the sound hardware of the platform you emulate on, so the only real issue is minimizing latency, and there you have a much bigger window than a single sample period (which is why audio buffers are larger than one sample). These operating intervals are on the order of Hz, not the KHz or MHz ranges that internal intervals may run at.
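To put a rough number on that audio window (44.1 kHz and 2048 samples are just example figures, not anything from a particular emulator): a 2048-sample buffer at 44100 samples per second holds 2048 / 44100, or about 46 ms of sound, so the emulator only has to top the buffer up every few tens of milliseconds to avoid dropouts, even though the individual samples are produced at kHz rates.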
And even if the device's output rate weren't the limiting factor, human perception would be. As humans we certainly don't perceive things much beyond the Hz range: change something too rapidly and it blurs together (hence interlacing on displays, and why pulse width modulation works for audio output); shift timing slightly and people won't even be able to notice. So long as it catches up in the end (global time consistency), it really doesn't matter much.