Re: An extremely large hash lookup mechanism [message #172234 is a reply to message #172232]
Mon, 07 February 2011 13:29
Erwin Moller
On 2/7/2011 1:06 PM, ram wrote:
> I have a MySQL table with customerid (bigint) & customer_unique_key
> (varchar).
>
> I have a PHP script that needs to upload customers into groups.
> The customer_unique_key will be uploaded and all the customerids
> should be entered into a new table.
>
> Initially my script was doing a per-record query to extract each
> customerid and print it.
> But this turned out to be too slow, since the upload file may have
> up to a million records.
>
> Now I modified the script to read all customer_unique_key ->
> customerid pairs into an array.
> This works fine and fast, but hogs the memory and crashes whenever
> the number of records crosses around 3-4 million.
>
>
> What is the best way I can implement a hash lookup? Should I use a
> CDB library?
A few general approaches:
1) Increase the memory limit for PHP for this script.
2) Work in batches: simply cut your original file into smaller
parts and run them sequentially through your "inserter" (a sketch follows below).
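For instance, batching could look roughly like this. I am guessing at your schema here: a table "customers" with columns customer_unique_key and customerid, a target table "new_table", an upload file "upload.txt" with one key per line, and an existing PDO connection in $pdo. Adjust everything to your actual setup.

<?php
// Approach 1), if batching alone is not enough:
ini_set('memory_limit', '512M');

$batchSize = 1000;
$keys = array();

// One customer_unique_key per line (assumed format).
$fh = fopen('upload.txt', 'r');
while (($line = fgets($fh)) !== false) {
    $keys[] = trim($line);
    if (count($keys) === $batchSize) {
        processBatch($pdo, $keys);
        $keys = array(); // free memory before the next batch
    }
}
if ($keys) {
    processBatch($pdo, $keys); // last partial batch
}
fclose($fh);

// Resolve one batch of keys to customerids with a single query
// and copy them into the target table in the same statement.
function processBatch(PDO $pdo, array $keys)
{
    $placeholders = implode(',', array_fill(0, count($keys), '?'));
    $stmt = $pdo->prepare(
        "INSERT INTO new_table (customerid)
         SELECT customerid FROM customers
         WHERE customer_unique_key IN ($placeholders)"
    );
    $stmt->execute($keys);
}
?>

This way you only ever hold $batchSize keys in memory, instead of the whole file.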
But it isn't clear to me what your problem is, hence the above advice. ;-)
Please describe in more detail how your source data looks and how your
table looks. And also give an example of a single insert you want to do.
(Right now I don't understand your array approach at all, let alone
why it is faster.)
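If the goal is simply to map each key to its customerid, you could also let MySQL do the whole lookup server-side: load the keys into a temporary table and join. A rough sketch, using the same guessed names as above (and a guessed key length); note that LOAD DATA LOCAL must be enabled on both server and client:

<?php
// MySQL does the "hash lookup" itself via the primary key index.
$pdo->exec("CREATE TEMPORARY TABLE upload_keys
            (customer_unique_key VARCHAR(64) PRIMARY KEY)");

// Bulk-load the uploaded keys.
$pdo->exec("LOAD DATA LOCAL INFILE 'upload.txt'
            INTO TABLE upload_keys (customer_unique_key)");

// One single INSERT ... SELECT instead of millions of round trips.
$pdo->exec("INSERT INTO new_table (customerid)
            SELECT c.customerid
            FROM customers c
            JOIN upload_keys u
              ON u.customer_unique_key = c.customer_unique_key");
?>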
Regards,
Erwin Moller
--
"That which can be asserted without evidence, can be dismissed without
evidence."
-- Christopher Hitchens