Re: Record pagination with count of total rows [message #170438 is a reply to message #170429]
Mon, 01 November 2010 15:08
Jerry Stuckle
On 11/1/2010 9:58 AM, matt wrote:
> On Nov 1, 8:58 am, Jerry Stuckle <jstuck...@attglobal.net> wrote:
>> On 11/1/2010 8:42 AM, matt wrote:
>>
>>> Hi all,
>>
>>> I'm struggling with a concept here, and I'd like to pick some brains.
>>> I've got a series of report pages with complicated queries that return
>>> row counts in the 10s of thousands. These are paginated, and after
>>> accepting search and sort criteria from the user, I'm displaying a
>>> header that looks like this:
>>
>>> Search contains 29,657 records. Records per page [select 25, 50, 100]
>>> Page [1] 2 3 ... 1186 1187 Next>>
>>
>>> To get record totals and then deduce page numbers, I'm pulling a full
>>> recordset from MySQL and using PHP to display the appropriate rows.
>>> The problem is that, in the example above at least, that query takes
>>> about 70 seconds to run. If I were to limit the search criteria so
>>> that it only returns <100 rows, then it's lightning fast. We're using
>>> ndb-cluster, so JOINs really have a performance impact.
>>
>>> This leads me to believe I could take advantage of LIMIT; however, I
>>> don't see how I'd get my total row count anymore.
>>
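One common MySQL answer to exactly this (a sketch under assumed table and
column names, not code from this thread) is SQL_CALC_FOUND_ROWS: run the
LIMITed query as usual, then ask FOUND_ROWS() for the un-LIMITed total:

<?php
// Minimal sketch: LIMIT-based pagination that still reports the total
// row count. Connection details, table and column names are placeholders.
$db = new mysqli('localhost', 'user', 'pass', 'reports');

$perPage = 50;
$page    = isset($_GET['page']) ? max(1, (int)$_GET['page']) : 1;
$offset  = ($page - 1) * $perPage;

// SQL_CALC_FOUND_ROWS tells MySQL to remember how many rows the query
// would have matched without the LIMIT clause.
$result = $db->query(
    "SELECT SQL_CALC_FOUND_ROWS t.*
       FROM some_table t
       JOIN other_table o ON o.t_id = t.id
      WHERE o.status = 'open'
      ORDER BY t.created
      LIMIT $offset, $perPage"
);

// FOUND_ROWS() returns that un-LIMITed total.
$row   = $db->query("SELECT FOUND_ROWS()")->fetch_row();
$total = (int)$row[0];
$pages = (int)ceil($total / $perPage);

The caveat: MySQL still has to find every matching row to compute the
total, so this saves the second query but not the JOIN cost. On a
70-second query it may not help much by itself.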
>>> Usually these reports are one-to-one with a particular table, so I've
>>> entertained the idea of running a separate COUNT query against that
>>> table with no JOINs. This won't suit my purposes because in a lot of
>>> cases, the search criteria are against fields in JOINed tables.
>>
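The separate count doesn't have to drop the JOINs, though. A variant
(again a sketch with made-up names, not from the post) reuses the same
FROM/WHERE the criteria need, but with COUNT(*) and no column list,
ORDER BY, or LIMIT, which is usually cheaper than fetching the rows:

<?php
// Count query that keeps the JOINs the search criteria require but
// skips the select list and ordering. All names are hypothetical.
$db    = new mysqli('localhost', 'user', 'pass', 'reports');
$where = "o.status = 'open'";   // built from the user's criteria

$row = $db->query(
    "SELECT COUNT(*)
       FROM some_table t
       JOIN other_table o ON o.t_id = t.id
      WHERE $where"
)->fetch_row();
$total = (int)$row[0];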
>>> I've also thought about using temporary tables that are periodically
>>> updated (think Oracle's materialized views). The problem there is that
>>> a lot of times, our users will go directly from a data entry page to a
>>> report and expect it to reflect the new data that was just entered.
>>> If I use these "snapshot" tables, I would also need to implement a
>>> mechanism that refreshes them whenever data is inserted/updated/
>>> deleted elsewhere.
>>
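For what that snapshot idea might look like in practice, here is a rough
sketch (every table and column name invented): a pre-joined copy of the
report data, rebuilt from the data-entry save path so a report run
immediately afterwards sees the new rows:

<?php
// Rebuild a pre-joined "snapshot" of the report data inside one
// transaction. DELETE rather than TRUNCATE, since TRUNCATE is DDL and
// would implicitly commit. All names here are invented.
function refresh_report_snapshot(mysqli $db)
{
    $db->query("START TRANSACTION");
    $db->query("DELETE FROM report_snapshot");
    $db->query(
        "INSERT INTO report_snapshot (id, name, status, created)
         SELECT t.id, t.name, o.status, t.created
           FROM some_table t
           JOIN other_table o ON o.t_id = t.id"
    );
    $db->query("COMMIT");
}

$db = new mysqli('localhost', 'user', 'pass', 'reports');
refresh_report_snapshot($db);   // call after each INSERT/UPDATE/DELETE

A full rebuild on every write is the blunt version; triggers or
incremental updates keyed on the changed rows would be the refresh
mechanism the paragraph above is worried about having to build.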
>>> Finally, the last approach I've considered is what I call the
>>> "PeopleSoft" method. That is, if you want a report, you enter all of
>>> your criteria, choose a format (XLS, HTML, etc.) and the report goes
>>> into a queue. You don't wait for your results, but instead go to a
>>> process management page, which would likely be AJAX-enabled to inform
>>> you when the report was built and ready for viewing. I think this
>>> would be the least convenient solution for our users.
>>
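The queue approach needs less plumbing than it sounds. A minimal sketch
(the report_queue table and its columns are invented for illustration):
the page only records the job, a cron/CLI worker runs the slow query
later, and the process management page polls the job's status:

<?php
// Record a report request for a background worker to pick up.
// Everything here is illustrative, not from the original post.
$db = new mysqli('localhost', 'user', 'pass', 'reports');

function enqueue_report(mysqli $db, $userId, array $criteria, $format)
{
    $stmt = $db->prepare(
        "INSERT INTO report_queue
                (user_id, criteria, format, status, requested_at)
         VALUES (?, ?, ?, 'pending', NOW())"
    );
    $json = json_encode($criteria);      // criteria stored for the worker
    $stmt->bind_param('iss', $userId, $json, $format);
    $stmt->execute();
    return $db->insert_id;               // job id the AJAX page polls on
}

$jobId = enqueue_report($db, 42, array('status' => 'open'), 'xls');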
>>> I'm just curious to see how other people have addressed this problem
>>> and what you've found to be the up/down sides of your chosen solution.
>>
>>> And, yes, the DBA is actively investigating why ndb is running much
>>> slower than the same queries on InnoDB or MyISAM. That's a separate
>>> post. For the purposes of this discussion, I'm interested in software
>>> solutions.
>>
>>> Thanks,
>>> Matt
>>
>> My first question would be: who's going to look at "10s of thousands
>> of rows" - especially if they have to go through thousands of pages to
>> read them?
>
> Accountants.
None that I know. They might go through that much on paper, where they
can make notes (even that would be suspect). But not on a web page.
--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
jstucklex(at)attglobal(dot)net
==================