Re: Record pagination with count of total rows [message #170505 is a reply to message #170503] Wed, 03 November 2010 14:26
Matthew Leonhardt
"Peter H. Coffin" <hellsop(at)ninehells(dot)com> wrote in message
news:slrnid2pkh(dot)1g0(dot)hellsop(at)abyss(dot)ninehells(dot)com...
> On Wed, 3 Nov 2010 08:14:00 -0400, Matthew Leonhardt wrote:
>>
>> "Norman Peelman" <npeelman(at)cfl(dot)rr(dot)com> wrote in message
>> news:iaqvjv$l1s$1(at)news(dot)eternal-september(dot)org...
>>
>> [snip]
>>
>>> Other options to consider:
>>>
>>> MySQL - make sure that the result (query) buffer is sufficiently large
>>> to hold many such result sets so that the full scan queries run
>>> once and your paginated queries pull from the buffer as MySQL doesn't go
>>> back to disk if the result set is already buffered (unless something has
>>> changed). If something has changed MySQL should go back to disk for the
>>> result set - this should be handled by MySQL automatically.
>>> Using 'LIMIT offset, numrows' during pagination should dramatically
>>> reduce query time when the full result set is already in the buffer.
>>
>> Like I said, DB server configuration is being explored in parallel. I don't
>> want this discussion to get off topic.
>
> I can understand that, but you've got one problem, not a problem of
> code, a problem of db layout, and a problem of db server config that are
> separate. What you do in one may/will affect what you do for the others.

I'll grant you that--I also don't want to anger the c.l.p topicality police
:)

>>> PHP - use APC (Alternative PHP Cache) to store result sets in shared
>>> memory with reasonable time to live. Probably about the same as using a
>>> session variable, but faster/less resources when multiple people are doing
>>> the same query (how often would that happen?). This could allow (with the proper
>>> logic) users to store/bookmark queries and come back to them without
>>> hitting the database again (good for queries of data that may not
>>> change).
>>
>> We have our own caching mechanism in place; however, I'll check out APC. I
>> doubt this will help us much, though. Our permissions are very granular
>> and there are rarely two users that have the same permissions (and would
>> see the same data returned from the query).
>
> This smells like trouble too. At least in the sense that the security it
> affords is likely to adversely affect performance. But that'd require
> a lot of detail about precisely what you're limiting from whom and with
> which and by how. Mostly, people are unwilling to share that kind of
> information.

Basically, we've added three fields to each primary table that we search
against. These fields are fk's to our permission table. Users carry a data
structure enveloping these permission keys in their session as well, so a
typical query looks like this:

SELECT ... FROM ...
WHERE fkPermission_area IN (...)
AND fkPermission_dept IN (...)
AND fkPermission_bus IN (...)

Where permissions are divided into three categories: geographical area,
department, and business type.

Without getting into the specifics of our business, a hypothetical query
(with values substituted for integer keys...) would look like this:

WHERE area IN ('Nevada', 'New Mexico')
AND dept IN ('Sales', 'Purchasing')
AND bus IN ('Retail')
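
To make that concrete, here's roughly how such a clause can be assembled from
the session arrays. This is only a sketch -- the table name, session layout,
and use of PDO are for illustration, not our actual code:

<?php
// Hypothetical session layout: permission keys carried as arrays of integer
// FKs, e.g. array('area' => array(3, 7), 'dept' => array(2), 'bus' => array(5)).
// Assumes each category holds at least one key (otherwise "IN ()" is invalid SQL).
$perms = $_SESSION['permissions'];

$clauses = array();
$params  = array();
foreach (array('area', 'dept', 'bus') as $cat) {
    $keys = $perms[$cat];
    // One "fkPermission_xxx IN (?, ?, ...)" clause per permission category
    $clauses[] = 'fkPermission_' . $cat . ' IN ('
               . implode(',', array_fill(0, count($keys), '?')) . ')';
    $params = array_merge($params, $keys);
}

// $pdo is an existing PDO connection; report_rows is a made-up table name.
$sql  = 'SELECT * FROM report_rows WHERE ' . implode(' AND ', $clauses);
$stmt = $pdo->prepare($sql);
$stmt->execute($params);
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
?>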


>> For this reason, also, MySQL's buffers and cache don't do too much
>> good with multiple-user scenarios--the portion of the WHERE clause
>> that filters results by user permissions is almost guaranteed to be
>> different with each user.
>
> It depends how and where you've flattened the data, and how it's
> indexed.

Right now we're basically fully normalized and well indexed.
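
For illustration, an index that covers those permission filters might look
something like this (table name hypothetical):

ALTER TABLE report_rows
    ADD INDEX idx_permissions (fkPermission_area, fkPermission_dept, fkPermission_bus);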

> [hypothetical] If your access is determined by a record code
> that links to a table of values that are used as a bitwise array to
> check against specific flags in a user-record bitwise array, then yes,
> it's going to be horrible. If you're using a float-value to describe the
> "sensitivity" of a record and comparing that to another float value,
> that's going to suck too. If you've got a massive intersection table
> between your data and your user records, that caches poorly if the whole
> intersection table doesn't fit in the cache and you've got multiple
> users fussing over the table at the same time, but may work just fine if
> there's usually only one person grovelling over the list of records at a
> time.

These are not representative of our situation, however...

> Similarly if you've got nearly unique views over the data for each
> user, limiting which records they can see, and also have more than one
> working at the same time.

This is something I'm actually considering (a rough SQL sketch follows the list):

1. User requests a report
2. If not exists, flattened memory table gets created just for that user
3. Check validity of temp table data (set by insert/update/delete logic)
4. If temp table data still valid, feed results from there
5. If temp table data not valid, drop temp table, go to step 2.
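
In MySQL terms the flattening step would be something along these lines -- a
sketch only, with a made-up table name keyed to the user id and the IN lists
standing for that user's permission keys:

-- Step 2: flatten the rows this user can see into a MEMORY table.
-- A non-TEMPORARY table is used so it survives across page requests;
-- note the MEMORY engine can't hold TEXT/BLOB columns.
CREATE TABLE tmp_report_42 ENGINE=MEMORY
    SELECT r.*
    FROM report_rows r
    WHERE r.fkPermission_area IN (...)
      AND r.fkPermission_dept IN (...)
      AND r.fkPermission_bus  IN (...);

-- Step 4: the total count and each page then come straight from memory.
SELECT COUNT(*) FROM tmp_report_42;
SELECT * FROM tmp_report_42 ORDER BY id LIMIT 50, 25;  -- page 3 at 25 rows/page

-- Step 5: invalidate by dropping the table.
DROP TABLE tmp_report_42;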

> Some of those scenarios may sound silly, but
> every one of them is a highly-granular permissions scheme that's been
> proposed to me at one point or another in the past. ... And, of course,
> there's the factor that most of the time, tracing back the development
> of the very granular user permissions ended up showing that the problem
> and requirements were very different from the scheme under
> consideration.

We've been building apps in this suite for a couple of years now. We're
pretty solid on permission scheme requirements. We've finally settled on
something that offers the flexibility that we need. The drawback, of course,
is that few queries are fed out of the query cache, although I don't have hard
numbers on Qcache_hits and Com_select offhand.
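
For anyone curious, those counters can be pulled directly from the server:

SHOW GLOBAL STATUS LIKE 'Qcache_hits';
SHOW GLOBAL STATUS LIKE 'Com_select';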