Hi!
On specific points:
- GL_BGR can cause a few issues, mainly about portability. I'm also not sure this format is the most optimized path in GL drivers. On the other hand, the BGR->RGB conversion is probably almost free, even on low-end (embedded included) hardware. We can time it to be sure.
- Yes, we force the texture *internal* format to RGBA even when it's not needed. The reason given in the comments is probably obsolete by now; ATI's drivers were most likely fixed a long time ago.
- This GL_RGBA value is the GPU's internal storage mode, so there's no trouble with the malloc'ed CPU memory, which is still RGB. The texsize value is updated because it's used at the end of the function to roughly estimate the amount of memory this texture needs on the hardware, so we account for that extra byte. To be exact, this increment (line 320) should be *inside* the ifndef, not after it (as it stands, it makes the estimate wrong on the iPhone target).
On a more global note about texture loading speed, I'm pretty sure that optimizing this function, even with a lot of effort, will yield almost nothing. Most of the time is spent on I/O, since there's a huge amount of data to transfer from disk to RAM. The only way to improve performance, I think, is to use another (compressed) texture format. I don't see many candidates: JPG, for known reasons, is a no-go for me; PNG does not compress that much (we should run a test anyway, comparing file sizes of RLE-encoded TGAs with PNGs); the only other serious option is the DDS format, which supports hardware-friendly compression schemes like S3TC (which Raydium supports too). But it won't suit every use case (normal maps, for instance, don't tolerate any kind of lossy compression well), and it adds a new dependency to the engine.
To make a long story short, it may just be me, but I don't see any easy way to reduce the disk-to-RAM amount of data during a texture load... Any ideas?