torehandy.blogg.se

Pygame image convert alpha

I'm a bit puzzled about pygame's convert(). It's common sense that I should convert a surface after loading an image into it (presumably from a jpg/png/etc. file), but what about surfaces on which I only use pygame's primitives, like pygame.draw() or Surface.fill()? The official documentation is vague about it: it says "fastest format for blitting. It is a good idea to convert all Surfaces before they are blitted many times."

Consider this simple ball-bouncing setup:

    import pygame
    screen = pygame.display.set_mode((800, 600))
    background = pygame.Surface(screen.get_size())
    ball = pygame.Surface((20, 20))

It has three surfaces: screen, background and ball. They are blitted a lot, as source or destination. Would any of these surfaces benefit from a .convert() when created, like surface = pygame.Surface(...).convert()? A lot of pygame tutorials do that, even for a background that is only ever .fill()ed. And if such surfaces would benefit from a convert, when should I do it? After a .fill(), before, or does it not matter? Is once enough, or should I "reconvert" after each draw? What if I load an image? And why isn't the so-called "fastest pixel format" the default when creating new surfaces? Any hints on .convert() usage are highly appreciated, thanks!

convert() is used to convert the pygame.Surface to the same pixel format as the one you use for the final display, the same one created by pygame.display.set_mode(). If you don't call it, then every time you blit a surface to your display surface a pixel conversion is needed; this is a per-pixel operation, very slow, instead of a series of memory copies. You may not feel the difference on your octa-core development PC, but bring up your CPU counter, or run on a constrained device like a handheld, and it makes a big difference. The recommendation is to convert as early as possible, preferably when you load or create your assets, or at least outside your game loop.

Why don't surfaces match the display by default? Good question, as it's an obvious newbie trap. I don't know the considerations that went into this design, but one issue is that you don't know until run time what your display pixel format will be. The user might specify 16-bit pixels and it might end up as some wacky format like BGR565. If you do some image processing at low bit depths, you might get results that you weren't expecting.

Another issue is that your display format most likely doesn't have alpha. If all surfaces matched the display format by default, then every time you blit semi-transparent surfaces, or ones loaded from PNGs or GIFs with transparent pixels, you would end up with images over a black rectangle. Pygame can't tell whether you need alpha surfaces or not, so you need to specify it yourself. If you need transparency, you can use convert_alpha(), which is marginally faster than leaving the surface unconverted but not nearly as fast as convert(), because alpha blending is expensive.

A related story: I'm running into random troubles with PyOpenGL transparency again, and the simplest solution is just to make the textures themselves transparent. Normally I would add the transparency in an image editor, but unfortunately I have ~250 images, so my solution is programming: I wrote a script which should convert all the images to be ~50% transparent. Now, what I see is a window opening and a nice animation of the images playing across it, but the saved images do not have an alpha channel, which means they have no transparency, which means the textures don't work. Changing the alpha value only changes the brightness: the images get darker, because I am seeing the black background (Surface.fill((0, 0, 0))) through them. Of course, all of this may be moot, because if you really cared about performance you would use a GPU-accelerated framework instead of pygame, which runs over SDL 1.x.
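As a minimal check of what convert() actually does, the sketch below (names and sizes are illustrative, not from the original post) creates a surface at an odd bit depth and confirms that convert() brings it to the display's pixel format. The dummy SDL video driver is used so it runs headless:

```python
import os

# Assumption: run headless; the dummy driver needs no real window.
os.environ.setdefault("SDL_VIDEODRIVER", "dummy")

import pygame

pygame.display.init()
screen = pygame.display.set_mode((800, 600))

# An 8-bit surface does not match the (typically 32-bit) display format,
# so every blit of it would trigger a per-pixel conversion.
raw = pygame.Surface((100, 100), depth=8)

# convert() returns a copy in the display's pixel format; blitting the
# copy is then a plain memory copy.
fast = raw.convert()
print(fast.get_bitsize() == screen.get_bitsize())  # True
```

Calling this once, right after creating or loading the surface, is enough; drawing or filling afterwards does not change the pixel format, so there is nothing to "reconvert".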

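The difference between convert() and convert_alpha() shows up directly on the surface flags: convert() drops per-pixel alpha to match the opaque display format, while convert_alpha() keeps it. A small sketch, again using the headless dummy driver; the SRCALPHA surface here stands in for an image loaded from a PNG with transparency:

```python
import os

os.environ.setdefault("SDL_VIDEODRIVER", "dummy")

import pygame

pygame.display.init()
pygame.display.set_mode((320, 240))

# Stand-in for a PNG loaded with pygame.image.load(...).
img = pygame.Surface((32, 32), pygame.SRCALPHA)
img.fill((200, 40, 40, 128))  # half-transparent red

with_alpha = img.convert_alpha()  # keeps the per-pixel alpha channel
without_alpha = img.convert()     # opaque, matches the display format

print(bool(with_alpha.get_flags() & pygame.SRCALPHA))     # True
print(bool(without_alpha.get_flags() & pygame.SRCALPHA))  # False
```

This is why blitting a transparent PNG that was convert()ed (rather than convert_alpha()ed) gives you the "image over a black rectangle" effect described above.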

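The ~250-image batch job mentioned in the story can be sketched in pygame itself. This is only a sketch under assumptions: the function name halve_alpha and the folder names are made up, and BLEND_RGBA_MULT is used to scale every pixel's alpha to roughly 50% (the blend multiply rounds, so expect values near 127-128 rather than exactly 128, and RGB channels may shift down by one step). Saving as PNG preserves the alpha channel:

```python
import os

os.environ.setdefault("SDL_VIDEODRIVER", "dummy")

import pygame

pygame.display.init()
pygame.display.set_mode((1, 1))

def halve_alpha(path_in, path_out):
    # convert_alpha() gives a 32-bit surface with per-pixel alpha.
    img = pygame.image.load(path_in).convert_alpha()
    # Multiply RGBA by (255, 255, 255, 128): colour is nearly unchanged,
    # alpha is scaled to roughly half.
    img.fill((255, 255, 255, 128), special_flags=pygame.BLEND_RGBA_MULT)
    pygame.image.save(img, path_out)  # PNG keeps the alpha channel

# Hypothetical batch loop over a folder of textures:
# for name in os.listdir("textures"):
#     if name.endswith(".png"):
#         halve_alpha(os.path.join("textures", name),
#                     os.path.join("textures_50", name))
```

Saving to a format without an alpha channel (BMP, JPG) would silently discard the transparency again, which matches the symptom described in the story.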
