GreenSock


Christoph Erdmann

Members · 27 posts

Everything posted by Christoph Erdmann

  1. That's nice, thanks! Are you thinking of a process where all images are uploaded and compressed with the same settings, or should every picture be presented to you individually? Yes, a better preview has been a topic for quite some time, and I simply don't know how best to do it. Panning and zooming the image works well at squoosh.app, but I don't like the comparison slider. It's better at hiding the fact that two images differ than at revealing it.
  2. Batch processing is a difficult subject for me. I built the tool to get the most out of each image. Batch processing runs counter to that, because it implies you no longer care exactly how small each compressed image ends up. And for that you can use one of the countless other tools. But maybe you mean something different...?
  3. It's been a few days. But you are the people I wrote this image compression tool for, so I'd like to know whether you're still happy with it. Are all the features understandable? Do you get better results with other tools? I'm always happy about your feedback.
  4. This div is at the top. Why do you have to scroll? Could you provide a screenshot?
  5. What nice feedback, thanks. If you have any ideas for improving it, please let me know.
  6. I used PNG8 for the masks some time ago, but most of the time the PNG was a little larger than the JPG, and you need two requests. So I decided to go with a JPG mask. I also tried a mask quality setting but removed it: if you really want a smaller JPG mask you have to make two requests, which adds HTTP header data. "Selective quality" doesn't seem to be the solution either. If you use quality settings that are not similar to each other, the JPG algorithm uses different patterns from the 8×8 table, and then you get a glow around the JPG because the background color of the source image is no longer masked perfectly. In my experience, gzipped base64 images are a little larger than the original images. But Chrome also counts the HTTP header data for requests, which is missing in the right part of your screenshot. Maybe that's the reason.
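     For what it's worth, the base64-overhead claim is easy to check with Node's built-in zlib. This is my own sketch, not part of the tool; random bytes stand in for already-compressed JPG data:

     ```javascript
     const zlib = require("zlib");
     const crypto = require("crypto");

     // JPG payloads are already compressed, so random bytes are a fair stand-in.
     const binary = crypto.randomBytes(100000);
     const base64 = Buffer.from(binary.toString("base64"));

     const gzBinary = zlib.gzipSync(binary).length;
     const gzBase64 = zlib.gzipSync(base64).length;

     // base64 inflates the payload by ~33%; gzip wins most of that back
     // (64 symbols need only ~6 bits each), but a small overhead remains.
     console.log({ raw: binary.length, gzBinary, gzBase64 });
     ```

     Both gzipped sizes land close to the raw size, which matches the "a little larger" observation above.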
  7. My new article "Finally understanding PNG" is now online. It also explains the "predator view": http://compress-or-die.com/Understanding-PNG Just a warning: English is not my native language, so you may stumble over some quirks. An English-speaking colleague is proofreading the article at the moment, but I couldn't wait and would be glad to get your feedback.
  8. I added an explanation for the PNG "predator view", aka "compression view". Hope it's clearer now what it does: http://compress-or-die.com/png
  9. That was mean. I implemented your code in compress-or-die and it was slower than the 8-bit code, so I created a fiddle to show you. But in the fiddle the 32-bit code was faster. Then I got it: the difference is that in compress-or-die I had inlined the code in the onload attribute of the img tag. That made the 8-bit code a little slower, but the 32-bit code a lot slower than all the other variants! So you only get the performance boost of the 32-bit code if you define a function (an IIFE doesn't seem to be enough); otherwise the situation flips completely. Here is the fiddle: https://jsfiddle.net/McSodbrenner/gtv3earr/ I removed the "i" from the img tags to deactivate the corresponding code, so only one img tag should be correct at a time when testing. But it's just a very small performance boost. Most of the time is lost in getImageData(), which is needed anyway.
  10. Btw.: I tried Uint32Array with direct pixel manipulation and did not see any improvement in run time. Maybe Chrome's V8 optimizes this internally.
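     For anyone curious what the 32-bit variant looks like: the trick is to view the same ArrayBuffer as both Uint8 and Uint32, so one 32-bit write replaces four 8-bit channel writes. A minimal sketch (note the little-endian byte order assumed here, which is what virtually all consumer hardware uses):

     ```javascript
     // One RGBA pixel, viewed two ways over the same ArrayBuffer.
     const buf = new ArrayBuffer(4);
     const u8  = new Uint8Array(buf);   // what getImageData()'s .data looks like
     const u32 = new Uint32Array(buf);  // the 32-bit view

     const r = 10, g = 20, b = 30, a = 255;
     // On little-endian machines, packing A|B|G|R puts bytes R,G,B,A in memory:
     u32[0] = (a << 24) | (b << 16) | (g << 8) | r;  // one write instead of four

     console.log(Array.from(u8));  // [10, 20, 30, 255] on little-endian hardware
     ```

     In real canvas code you would build the Uint32Array over `imageData.data.buffer` instead of a fresh ArrayBuffer.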
  11. Jackets: This is actually the perfect case for transparent JPGs. jacket0.png gets a file size of 205kB instead of 2637kB with the default quality settings. But I suppose you'd have to say goodbye to your texture packer then; I think it would be worth it. Keep an eye on the decompression time, though (you know, the typed array is on my list). I also don't see notable differences using 8-bit PNGs (908kB instead of 2637kB), but I think this won't work for all your products.
     Spine: Just use 8-bit PNG and set a color amount that pleases you (48kB instead of 212kB with 256 colors). Take a look at the compression view. There are stains I've marked with red rectangles: those stains take up space but shouldn't be there, correct? The compression view is really useful for identifying dirty transparency.
     Dungeon: Just use 8-bit PNG and set a color amount that pleases you. If you need more than 256 colors, I would break the sheets apart and consolidate images with similar colors. You can use the compression view to check the correctness of your sheets: in area 1 the PNG spends a lot of file size compressing the ponds of lava, so I think this is an animation; if it's not, there is an opportunity for improvement. In area 2 the ponds are duplicates. Fine. In area 3 there is a pond. In area 4 there is a different pond. Why? Copy the blue area from area 3 and you will save space. These are just some examples of how to use the compression view. Does that help you?
  12. ... except the funky sprite generation part. I think we are hijacking this thread.
  13. If you want to rotate an asset you don't use this technique, of course. It's just another technique in your toolbox for standard animations with position and opacity.
  14. Yes, the images are just a little bigger than the alternatives because most of the time they are PNGs. And of course this is not the holy grail for every type of campaign. But for standard campaigns with many size adaptations it works well.
  15. I think a popular method is to export your assets from Photoshop including the whitespace, so all your assets have the same size as your stage and are positioned at top: 0px and left: 0px. Most simple animations will then work, and with a bit of luck you just have to export new assets for the size adaptations.
  16. Hope the selective mask was not too hard to find; I am a little unsure about the UX. It is such an important feature. As so often, the answer is... it depends. But just post your uncompressed sprite sheet here. I feel like having a look at it (hope this phrase is correct). And once again thanks for the links. I plan to address this subject next week. At the moment I am working on a JavaScript JPG blocking-artifact deblocker, which is my priority.
  17. Yes, it's always needed if the JavaScript domain and the image domain are different. If you are building banners it makes sense to add it every time, because some ad servers put the assets on a different domain than the index.html file. Maybe I should point that out. No, sorry, there won't be a binary. And yes, as OSUblake stated, it works as long as JavaScript is available. Oh, nice one, thanks for this. I will take a look at it and try to extend my code.
  18. For the whole tool I try to use only the best compressors available, so for 8-bit PNGs it's pngquant (also the best choice in my view). That's the reason the conversion takes a little time on big images. For some things I coded my own stuff (e.g. JPGs with transparency and JPGs with "selective quality"). At the moment I am in a JPG research phase, so some small improvements should arrive soon.
  19. Hi, I just want to introduce Compress-Or-Die, an online compression tool created especially for the creators of banners... so, I hope, for most of you. It isn't a tool like tinyjpg or jpegmini that just lets you shrink existing JPGs a little. It's one that creates your (also low-quality) images from your original data and really squeezes out the last byte. And it allows things like JPGs with transparency and "selective quality" (as known from Adobe Fireworks), btw... Take a look at it here: http://compress-or-die.com/ In this context these articles could be interesting; they explain a lot of the options you can set: http://compress-or-die.com/Understanding-JPG http://compress-or-die.com/Understanding-PNG I am the author of the tool and the articles, so if you have questions, wishes or anything else, just drop me a line. Thanks, Christoph
  20. Thanks for the nice explanation and the example. onUpdate should be what I've been searching for.
  21. 1. So, if I use my own rAF, it could be that my render frame is skipped while GSAP calculated animations unnecessarily, or that GSAP's execution was skipped and my rAF renders the same data a second time. Correct? 2. I think I understood, but especially in the banner industry the animations are usually so simple that you very often rely completely on GSAP. There it is a little counterproductive if ticks are triggered when nothing changes. In my case the three.js banner is also rendered when nothing changes or when the animation is complete. Now imagine we have multiple banners on a page which are rendered all the time, regardless of whether anything is animated. That could stress the CPU or GPU a lot. I fixed it a little by doing this (it stops rendering at the end of the main timeline): if (main_timeline.isActive()) { renderer.render(scene, camera); } But the ticks still fire during the pauses between the single scenes. It would be nice to have rendering stopped then too. Is there a way to do this? What about an alternative tick event that takes this into account? What do you think?
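     The gating idea above can be sketched without GSAP or three.js at all. The stubbed ticker and timeline below are stand-ins with illustrative names (not the real GSAP API), just to show that a tick can still fire while the render is skipped:

     ```javascript
     // Stand-in for a tick dispatcher like GSAP's ticker (illustrative only):
     function makeTicker() {
       const listeners = new Set();
       return {
         add: (fn) => listeners.add(fn),
         remove: (fn) => listeners.delete(fn),
         tick: () => listeners.forEach((fn) => fn()),
       };
     }

     let renders = 0;
     const timeline = { active: true, isActive() { return this.active; } };
     const ticker = makeTicker();

     function update() {
       // Stands in for renderer.render(scene, camera):
       if (timeline.isActive()) renders++;
     }
     ticker.add(update);

     ticker.tick();            // animation running -> render happens
     timeline.active = false;  // pause between scenes, or timeline finished
     ticker.tick();            // tick still fires, but no render happens
     console.log(renders);     // 1
     ```

     With real GSAP you would additionally remove the listener once the main timeline completes, so even the no-op ticks stop.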
  22. Hi, maybe it's my wrong expectation. I'm using GSAP with three.js this way: function update() { console.log('Update executed'); renderer.render(scene, camera); } TweenLite.ticker.addEventListener("tick", update); In GSAP I have a main timeline with some nested sub-timelines. My expectation is that update() should only be called while a visible animation is running, so if no element is animated, update() should not be called. But it is always called; it even keeps running when the main timeline is at its end and the animation is over. Is this the correct behaviour? And why should I use the ticker when a simple var raf = function () { requestAnimationFrame(raf); update(); }; raf(); does the job? Is it just to have the ability to set the FPS, or am I missing something? Thanks, Christoph
  23. This is "preloading", not "polite loading", isn't it? I don't know what's supposed to be polite about preloading your assets. I've read about it here: http://support.adform.com/documentation/build-html5-banners/html5-banner-formats/polite-load-ad/ (see "Polite banner from one file") and https://support.google.com/richmedia/answer/2672514?hl=en (see "Set up polite loading with JavaScript"). Both use custom events. And DoubleClick allows 200kB loaded instantly plus another 300kB loaded politely, as recommended by the IAB: https://www.iab.com/guidelines/rich-media-guidance/ I still have the feeling I'm not understanding some little thing...
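     The custom-event mechanism both vendors describe boils down to queueing heavy work until the platform signals that the host page has finished loading. A platform-agnostic sketch (function and event names are mine, not the real Adform/DoubleClick APIs):

     ```javascript
     const pending = [];
     let politeReady = false;

     // Banner code registers its heavy asset loads here:
     function loadPolitely(loader) {
       if (politeReady) loader();  // event already fired: load immediately
       else pending.push(loader);  // otherwise queue until the platform says go
     }

     // In a real banner this would be wired to the platform's custom event,
     // e.g. something like window.addEventListener("politeLoad", onPoliteLoad):
     function onPoliteLoad() {
       politeReady = true;
       pending.splice(0).forEach((fn) => fn());
     }

     const loaded = [];
     loadPolitely(() => loaded.push("video.mp4"));  // queued
     onPoliteLoad();                                // host page finished loading
     loadPolitely(() => loaded.push("extra.png"));  // runs immediately now
     console.log(loaded);  // ["video.mp4", "extra.png"]
     ```

     The instant-load budget (e.g. 200kB) ships normally; everything routed through the queue counts against the polite budget.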