Re: Windows binaries 64bit for PHP [message #177991]
Wed, 09 May 2012 20:46
Daniel Pitts
On 5/9/12 12:52 PM, Jerry Stuckle wrote:
> On 5/9/2012 2:30 PM, Daniel Pitts wrote:
>> On 5/9/12 9:34 AM, Peter H. Coffin wrote:
>>> On Wed, 09 May 2012 09:56:44 +0200, Erwin Moller wrote:
>>>> On 5/9/2012 4:29 AM, Peter H. Coffin wrote:
>>>> > On Tue, 08 May 2012 22:25:26 +0200, Michael Fesser wrote:
>>>> >> .oO(Jerry Stuckle)
>>>> >>
>>>> >>> On 5/7/2012 11:37 PM, Daniel Pitts wrote:
>>>> >>>> Exactly true, but if you scale to sizes you don't need, you indeed
>>>> >>>> use more processor time! Our disk space is definitely not the
>>>> >>>> bottleneck.
>>>> >>>
>>>> >>> And if you repeatedly rescale the same image to the same size,
>>>> >>> you're using even more processor time!
>>>> >>
>>>> >> You missed the word 'caching'. You rescale when needed, and only once.
>>>> >
>>>> > How is this different than pre-scaling the images?
>>>> >
>>>>
>>>> Hi Peter,
>>>>
>>>> It is different because they are *only* rescaled when not found.
>>>>
>>>> One approach I used:
>>>>
>>>> 1) Need image xyz_2012_march_nr12_300x500.jpg
>>>> (300x500 being the dimensions needed.)
>>>> 2) Check if it exists.
>>>> If not: create it from the original (xyz_2012_march_nr12.jpg in
>>>> this case) and store it.
>>>>
>>>> One can easily wrap this functionality in a function.
>>>>
>>>> So the difference is that you don't need a batch job that apparently
>>>> takes months and that will resize many images that are never needed,
>>>> or never needed at that size.
>>>> (I have my doubts about the alleged months, but that doesn't matter.)
>>>
>>> Still seems a waste to use webserver CPU for something that's
>>> apparently time-critical and intensive when it could be handled as
>>> part of a production build. I mean, don't you KNOW what size image
>>> you're going to need? Will you suddenly need
>>> xyz_2012_march_nr12_318x1260.jpg? Or are you working around someone
>>> just forgetting to build something that IS known about and should be
>>> part of a documented process?
>>>
>>
>>
>> Our content producers upload images through our CMS, and those images
>> currently get scaled to the "common" sizes. We have a catalog of 7.5
>> million images.
>>
>> Now, it happens every so often that one of our (many) design teams or
>> product teams decides that one of our (many) sites needs a redesign.
>> When that happens, sometimes they want a new size (or have specific
>> cropping requirements, etc.).
>>
>> If it takes 1 second to reprocess an image (and it averages much higher
>> than that), that is 7.5 million seconds, or roughly 87 days, to
>> reprocess our entire catalog.
>>
>> We are not a little shop working on a site for a single client. We are
>> a large media company. Some things that work for small to medium-sized
>> sites do not work at all when you're dealing with millions of images.
>>
>> The point is, if you don't need dynamic resizing, good for you; don't
>> go to the trouble. Some of us, on the other hand, work at a different
>> scale.
>
> You need to get rid of that TRS-80 and get a decent machine. It
> shouldn't take anywhere near a second to resize an image.
Our source images are often extremely high resolution, so part of the
processing time is I/O-bound. I haven't actually worked on the
image-processing codebase, and I do suspect there are efficiencies that
could be gained somewhere, but the fact remains that at our scale,
dynamic resizing is a better solution.
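
For what it's worth, here is a minimal sketch of the check-then-resize
wrapper Erwin describes above, assuming GD and JPEG sources. The function
name, the cache-file naming, and the quality setting are just illustrative,
not anything from our actual codebase, and it scales to exactly the
requested dimensions, so real code would still have to handle aspect
ratio, cropping, and non-JPEG formats.

<?php
// Check-then-resize sketch: look for a cached copy at the requested size
// and only invoke GD when it is missing or older than the original.
function cachedThumbnail($originalPath, $width, $height)
{
    $info = pathinfo($originalPath);
    $cachedPath = sprintf('%s/%s_%dx%d.jpg',
        $info['dirname'], $info['filename'], $width, $height);

    if (!file_exists($cachedPath)
            || filemtime($cachedPath) < filemtime($originalPath)) {
        $src = imagecreatefromjpeg($originalPath);
        $dst = imagecreatetruecolor($width, $height);
        imagecopyresampled($dst, $src, 0, 0, 0, 0,
            $width, $height, imagesx($src), imagesy($src));
        imagejpeg($dst, $cachedPath, 85);   // quality 85 is arbitrary
        imagedestroy($src);
        imagedestroy($dst);
    }
    return $cachedPath;
}

// First request pays the resize cost; later requests just return the path.
$thumb = cachedThumbnail('xyz_2012_march_nr12.jpg', 300, 500);
?>

In production you would also want some locking or an atomic rename so two
concurrent requests don't resize the same file at once, but that's the
whole idea in a couple of dozen lines.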