Re: Best strategy for creating an application scoped variable? [message #172054 is a reply to message #172051]
Fri, 28 January 2011 15:32 |
Jerry Stuckle
On 1/28/2011 10:17 AM, laredotornado(at)zipmail(dot)com wrote:
> On Jan 28, 8:00 am, Jerry Stuckle <jstuck...@attglobal.net> wrote:
>> On 1/28/2011 2:30 AM, Denis McMahon wrote:
>>
>>
>>
>>> On 28/01/11 03:24, Jerry Stuckle wrote:
>>>> On 1/27/2011 9:40 PM, Peter H. Coffin wrote:
>>>> > On Thu, 27 Jan 2011 16:08:01 -0800 (PST), laredotorn...@zipmail.com
>>>> > wrote:
>>
>>>> >> I'm using PHP 5.2. I would like to populate a hash from a database
>>>> >> query. The hash should be available to all users of the application
>>>> >> and would only be updated very occasionally. The database query is
>>>> >> expensive, and I would prefer only to run it once, whenever PHP was
>>>> >> restarted, or on the rare occasion when the database data changed.
>>>> >> What is the best strategy for implementing this hash?
>>
>>>> > I'm confused. What "hash" do you want to "populate"? Do you just want to
>>>> > stick a pregenerated value in a table? Maybe you need an insert/update
>>>> > trigger?
>>
>>>> That was my thought - create a table in the database with the required
>>>> information and update it based on a trigger. Much easier than trying
>>>> to use shared memory or the like.
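[A minimal sketch of that trigger idea, assuming MySQL via PDO and two hypothetical tables: "orders" (the busy base table) and "order_summary" (the cheap snapshot). The real trigger body would be whatever slice of the expensive query a single row change invalidates.]

<?php
// Hypothetical sketch: keep a small summary table current on every
// insert, so PHP only ever reads the cheap snapshot table.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

$pdo->exec("
    CREATE TRIGGER orders_after_insert AFTER INSERT ON orders
    FOR EACH ROW
        UPDATE order_summary
        SET total = total + NEW.amount
        WHERE customer_id = NEW.customer_id
");
?>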
>>
>>>> I guess an alternative would be to create a PHP file from the generated
>>>> data and include it where necessary. But there's always the problem of
>>>> updating it when the web server restarts (PHP doesn't "restart" - it
>>>> starts every time a request is made for a PHP file - and only then).
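[A minimal sketch of that include-file approach, assuming a hypothetical run_expensive_query() that returns the hash as an associative array; var_export() turns the array back into literal PHP source.]

<?php
// Regenerate the include file from the expensive query's results.
$data = run_expensive_query();   // hypothetical; returns the hash
file_put_contents('/var/cache/app/hash.php',
                  '<?php return ' . var_export($data, true) . ';');

// Any page can then load the hash with a plain include:
$hash = include '/var/cache/app/hash.php';
?>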
>>
>>> I guess the included file could be created with a cron job, say every 6
>>> hours or so?
>>
>>> To try and minimise file access conflicts, it might be best to create it
>>> with a temporary name and then use a shell "mv temp_file actual_file"
>>> at the end of the cron job.
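[PHP can do the same temp-then-move dance itself: rename() is atomic when source and target are on the same filesystem, so a concurrent include always sees either the old file or the new one, never a half-written one. A sketch, reusing the hypothetical names from above.]

<?php
$data   = run_expensive_query();          // hypothetical, as above
$target = '/var/cache/app/hash.php';
$temp   = $target . '.tmp.' . getmypid(); // same directory as the target

file_put_contents($temp, '<?php return ' . var_export($data, true) . ';');
rename($temp, $target);                   // atomic on the same filesystem
?>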
>>
>>> However, I'd have thought that copying the query results into a new
>>> table would be the best answer. It would be a static snapshot of the
>>> expensive query, which you could then access as "select * from <table>",
>>> maybe running a cron process to regenerate it every 6 / 12 / 24 /
>>> whatever hours.
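[The snapshot table is short work from PHP, assuming the expensive query is a plain SELECT (hypothetical names again): MySQL's CREATE TABLE ... SELECT materialises the result set, and RENAME TABLE gives the same atomic swap as the shell mv.]

<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// Build the new snapshot beside the live one, then swap atomically.
$pdo->exec("DROP TABLE IF EXISTS report_snapshot_new");
$pdo->exec("CREATE TABLE report_snapshot_new
            SELECT customer_id, SUM(amount) AS total   -- the expensive query
            FROM orders GROUP BY customer_id");
$pdo->exec("RENAME TABLE report_snapshot TO report_snapshot_old,
                         report_snapshot_new TO report_snapshot");
$pdo->exec("DROP TABLE report_snapshot_old");
?>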
>>
>>> Or create an extra table with a time field that you update when you run
>>> the query, and check this field every time you access the data. If the
>>> data is older than some defined limit, call the expensive query to
>>> update the snapshot table.
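[A sketch of that age-check variant, assuming a hypothetical one-row table snapshot_meta(refreshed_at TIMESTAMP) kept next to the snapshot.]

<?php
define('MAX_AGE', 6 * 3600);   // refresh at most every six hours

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$age = $pdo->query("SELECT UNIX_TIMESTAMP() - UNIX_TIMESTAMP(refreshed_at)
                    FROM snapshot_meta")->fetchColumn();

if ($age === false || $age > MAX_AGE) {
    rebuild_snapshot($pdo);    // hypothetical: reruns the expensive query
    $pdo->exec("UPDATE snapshot_meta SET refreshed_at = NOW()");
}

// Every request then reads only the cheap table.
$rows = $pdo->query("SELECT * FROM report_snapshot")->fetchAll();
?>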
>>
>>> Rgds
>>
>>> Denis McMahon
>>
>> I wouldn't run a cron job. I would use the database tools to run the
>> query as necessary.
>>
>> And there are several ways to protect the file, if you do write to a
>> file. For instance, lock the file before writing and before including.
>> But I think creating a table with the results would be much better.
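[A sketch of that locking, using PHP's advisory flock(): the writer holds an exclusive lock while rewriting, readers take a shared lock around the include, and neither side sees a half-written file. Both sides must cooperate, since the locks are advisory; names are hypothetical, as above.]

<?php
$path = '/var/cache/app/hash.php';
$data = run_expensive_query();   // hypothetical, as in the earlier sketches

// Writer: 'c' opens without truncating (PHP >= 5.2.6); truncate under the lock.
$fh = fopen($path, 'c');
flock($fh, LOCK_EX);
ftruncate($fh, 0);
fwrite($fh, '<?php return ' . var_export($data, true) . ';');
fflush($fh);
flock($fh, LOCK_UN);
fclose($fh);

// Reader: shared lock held across the include.
$fh = fopen($path, 'r');
flock($fh, LOCK_SH);
$hash = include $path;
flock($fh, LOCK_UN);
fclose($fh);
?>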
>>
>
> Actually, I really like the idea of creating the file with the
> database data already written to it, provided including that file
> would be faster than making a call to the database for every page
> request. Thanks for all the ideas, - Dave
Not necessarily. If the data are required that often, chances are the
results will be in the database cache, and fetching them could be faster
than reading from the file system. It's a mistake to assume that any file
system call is faster than any database call.
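When in doubt, measure both on the actual box rather than guessing. A
crude timing sketch, using the hypothetical paths and table names from
the earlier sketches:

<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

$t0 = microtime(true);
for ($i = 0; $i < 1000; $i++) {
    $hash = include '/var/cache/app/hash.php';
}
$t1 = microtime(true);
for ($i = 0; $i < 1000; $i++) {
    $rows = $pdo->query("SELECT * FROM report_snapshot")->fetchAll();
}
$t2 = microtime(true);

printf("include: %.4fs   query: %.4fs\n", $t1 - $t0, $t2 - $t1);
?>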
--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
jstucklex(at)attglobal(dot)net
==================