FUDforum

Home » Imported messages » comp.lang.php » preg_match() oddities and question
Re: preg_match() oddities and question [message #176103 is a reply to message #176098] Wed, 23 November 2011 18:58
Peter H. Coffin
On Wed, 23 Nov 2011 19:01:14 +0100, Sandman wrote:
> In article <slrnjcpunb(dot)85q(dot)hellsop(at)nibelheim(dot)ninehells(dot)com>,
> "Peter H. Coffin" <hellsop(at)ninehells(dot)com> wrote:
>
>> On Wed, 23 Nov 2011 09:55:22 +0100, Sandman wrote:
>>
>>> Right, but your example is not a valid argument for that conclusion.
>>> My examples contained the variations of addresses that I wanted
>>> to match. Or are you saying that there is no way to use regular
>>> expressions to catch the examples I gave? Because I have a hard time
>>> believing that.
>>
>> Address-matching is a hard task. I did that for a decade professionally
>> (as part of a job, not the sole function), and it's not easy to do well
>> for even one postal system, and trying to write a generalized one is
>> basically impossible to manage in one lifetime. The best *simple* way
>> to manage it is to take a field, blow it out into individual words,
>> standardize all the words you can find without trying to sort out
>> what they are (which is the Very Hard part of that task), throw the
>> alphabetic ones into soundex or nysiis, make a loose match by a chunk of
>> postal code or city code or province, then pick the item(s) that have
>> the greatest number of matches between incoming and loose-match record
>> of the numeric and nysiis-encoded alphabetical elements. If you weight
>> things like "numeric match = 1, plaintext that's in a dictionary that
>> matches when nysiis = 2, nondictionary text that matches nysiis = 3",
>> and do that for NAME as well as ADDRESS, you get about as good as you
>> can get without buying someone else's work. And that's STILL a lot of
>> effort to write. Regexp alone for address matching is a snipe-hunt. It
>> looks obviously right and you can spend a lot of time playing with it,
>> but it ends up being a dead end.
>
> I thank you for your input, but I still maintain that my examples
> could be parsed by using a regular expression, and only when shown
> otherwise with examples will I admit it :-D
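The weighted loose-match scoring described in the quote above can be sketched roughly as follows. This is a toy in Python rather than the thread's PHP, and the abbreviation table, the simplified Soundex (standing in for NYSIIS), and the weights are all illustrative assumptions, not anyone's production code:

```python
import re

# Simplified American Soundex: first letter plus up to three digits.
# (A real system would use NYSIIS and postal-code blocking, per the post.)
SOUNDEX_CODES = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
                 **dict.fromkeys("dt", "3"), "l": "4",
                 **dict.fromkeys("mn", "5"), "r": "6"}

def soundex(word):
    word = word.lower()
    digits, prev = [], SOUNDEX_CODES.get(word[0], "")
    for ch in word[1:]:
        d = SOUNDEX_CODES.get(ch, "")
        if d and d != prev:
            digits.append(d)
        prev = d
    return (word[0].upper() + "".join(digits) + "000")[:4]

# "Standardize all the words you can find" -- a tiny stand-in table.
ABBREV = {"st": "street", "ave": "avenue", "rd": "road"}

def tokenize(field):
    """Blow a field out into standardized lowercase words."""
    return [ABBREV.get(t, t) for t in re.findall(r"[a-z0-9]+", field.lower())]

def score(incoming, candidate, dictionary):
    """Weights from the post: numeric match = 1, dictionary word matching
    by phonetic code = 2, non-dictionary word matching = 3."""
    cand = tokenize(candidate)
    cand_nums = {t for t in cand if t.isdigit()}
    cand_codes = {soundex(t) for t in cand if not t.isdigit()}
    total = 0
    for tok in tokenize(incoming):
        if tok.isdigit():
            total += 1 if tok in cand_nums else 0
        elif soundex(tok) in cand_codes:
            total += 2 if tok in dictionary else 3
    return total
```

So `score("123 Mian Street", "123 Main St", {"street"})` still matches the typo "Mian" against "Main" phonetically; among the loose-matched candidates, the one with the highest score wins.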

*grin* Any given (note: given) example set can be parsed with a
sufficiently complicated regexp. If your task is small enough and clean
enough, it might not even be THAT hard to accomplish. It's impossible to
provide advice about it, though, without having that complete example
set as well. The incoming data, however, is almost always going to
contain data that is not clean enough and will also probably end up
containing stuff that does not match your parsing rules, in a "because
fools are so ingenious" sense.
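To make the point concrete: Sandman's actual example set never appears in this message, so the three address forms below are invented stand-ins, and the sketch uses Python's `re` (PCRE-like, in the spirit of the newsgroup's preg_match) rather than PHP. A pattern can always be made complicated enough to cover a *given* set exactly; it's the data outside the set that bites:

```python
import re

# Invented example set -- a stand-in for whatever Sandman's real one was.
examples = ["123 Main Street", "123 Main St.", "Main Street 123"]

pattern = re.compile(r"""^(?:
      (?P<num1>\d+)\s+(?P<name1>[A-Za-z ]+?)\s+(?:Street|St\.?)   # number first
    | (?P<name2>[A-Za-z ]+?)\s+(?:Street|St\.?)\s+(?P<num2>\d+)   # number last
)$""", re.VERBOSE)

# Every curated example matches and yields its house number...
for ex in examples:
    m = pattern.match(ex)
    number = m.group("num1") or m.group("num2")

# ...but real incoming data drifts outside the rules almost immediately:
assert pattern.match("123 Main Blvd") is None
```

Each new suffix, word order, or typo grows the pattern, which is how the snipe-hunt starts.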

And, at that point, you'll want to be looking at how you handle those
exceptions: reject, pass, or send for clerical review, and what those
categories mean for your process.
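That triage step can be sketched in a few lines. The three category names come from the post; the parse hook and the confidence threshold are invented for illustration:

```python
REVIEW_THRESHOLD = 0.9  # assumed cutoff between "pass" and human review

def triage(record, parse):
    """Route one incoming record: reject, pass, or send for clerical review."""
    try:
        result = parse(record)
    except ValueError:
        return ("reject", record)              # unparseable: bounce it
    if result["confidence"] >= REVIEW_THRESHOLD:
        return ("pass", result)                # clean enough to flow through
    return ("clerical review", result)         # borderline: a human decides
```

Hook `parse` up to whatever matcher you settle on; the useful part is deciding up front what each bucket means for the rest of your process.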

> No offense, though.

None to take.

--
58. If it becomes necessary to escape, I will never stop to pose
dramatically and toss off a one-liner.
--Peter Anspach's list of things to do as an Evil Overlord