NLP techniques are relatively ill-equipped to deal with this type of text.
Phrased differently: it is quite possible to build a solution which includes NLP processes to implement the desired classifier, but the added complexity doesn't necessarily pay off in terms of speed of development or classifier precision improvements.
If one really insists on using NLP techniques, POS-tagging, with its ability to identify nouns, is the most obvious idea; chunking and access to WordNet or other lexical resources are other plausible uses of NLTK.
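For what it's worth, here is a minimal sketch of that POS-tagging idea; the sample line is purely illustrative:

```python
# Minimal POS-tagging sketch: pull out the nouns from one input line.
# Requires the NLTK data packages 'punkt' and 'averaged_perceptron_tagger'
# (via nltk.download(...)); package names may differ across NLTK versions.
import nltk

line = "stainless steel hex nut 1/2 in"   # hypothetical input line
tokens = nltk.word_tokenize(line)
tagged = nltk.pos_tag(tokens)             # list of (word, POS tag) pairs
nouns = [word for word, tag in tagged if tag.startswith('NN')]
print(nouns)
```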
Instead, an ad-hoc solution based on simple regular expressions and a few heuristics, as suggested by NoBug, may be an appropriate way to tackle this problem. Of course, such a solution carries two main risks:
- over-fitting to the portion of the text reviewed/considered in building the rules
- possible messiness/complexity of the solution if too many rules and sub-rules are introduced.
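To make the ad-hoc approach concrete, here is a minimal sketch of a regex-rule classifier; the patterns and class labels are invented for illustration:

```python
import re

# Hypothetical ordered rule list: the first matching pattern wins, so the
# most specific rules should come first.
RULES = [
    (re.compile(r'\bhex\s+nut\b'), 'fastener'),
    (re.compile(r'\bbolt\b'),      'fastener'),
    (re.compile(r'\bgasket\b'),    'seal'),
]

def classify(line):
    text = line.lower()
    for pattern, label in RULES:
        if pattern.search(text):
            return label
    return None   # no rule fired; candidate for manual review

print(classify("STAINLESS STEEL HEX NUT 1/2 IN"))   # -> 'fastener'
```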
Some statistical analysis of the complete text to be considered (or of a very large sample of it) would help guide the selection of a few heuristics and also address the over-fitting concern. I am quite sure that a relatively small number of rules, associated with a custom dictionary, should be sufficient to produce a classifier with appropriate precision as well as speed/resource performance.
A few ideas:
- count all the words (and possibly all the bi-grams and tri-grams) in a sizable portion of the corpus at hand. This info can drive the design of the classifier by allowing us to allocate the most effort and the most rigid rules to the most common patterns (see the first sketch after this list).
- manually introduce a short dictionary which associates the most popular words with:
- their POS function (mostly a binary matter here: i.e. nouns vs. modifiers and other non-nouns)
- their synonym root [if applicable]
- their class [if applicable]
- If the pattern holds for most of the input text, consider using the last word before the end of the text, or before the first comma, as the main key to class selection (see the classifier sketch after this list).
If the pattern doesn't hold, just give more weight to the first and to the last word.
- consider a first pass where the text is rewritten with the most common bi-grams replaced by a single word (even an artificial code word) which would be in the dictionary (see the normalization sketch after this list)
- consider also replacing the most common typos or synonyms with their corresponding synonym root. Adding regularity to the input helps improve precision and also helps make a few rules / a few entries in the dictionary yield a big return on precision.
- for words not found in the dictionary, assume that words which are mixed with numbers and/or preceded by numbers are modifiers, not nouns.
- consider a two-tier classification whereby inputs which cannot plausibly be assigned a class are put in a "manual pile" that prompts additional review, which in turn results in the addition of rules and/or dictionary entries (see the classifier sketch after this list). After a few iterations the classifier should require fewer and fewer improvements and tweaks.
- look for non-obvious features. For example, some corpora are built from a mix of sources, and some of those sources may include particular regularities which help identify the source and/or be applicable as classification hints. For example, some sources may only contain, say, uppercase text (or text typically longer than 50 characters, or truncated words at the end, etc.)
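As a rough sketch of the frequency analysis in the first bullet (the corpus file name and the whitespace tokenization are assumptions on my part):

```python
from collections import Counter

word_counts = Counter()
bigram_counts = Counter()

# 'corpus_sample.txt' is a hypothetical file with one description per line.
with open('corpus_sample.txt') as f:
    for raw in f:
        words = raw.lower().split()
        word_counts.update(words)
        bigram_counts.update(zip(words, words[1:]))

# The most common words/bi-grams are where rules and dictionary entries
# pay off the most.
print(word_counts.most_common(50))
print(bigram_counts.most_common(50))
```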
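The normalization pass (bi-gram merging plus typo/synonym folding) could then look something like this; both tables are invented here and would in practice be derived from the counts above:

```python
# Hypothetical normalization tables.
BIGRAM_MERGES = {('stainless', 'steel'): 'stainless_steel'}
SYNONYM_ROOTS = {'stnless': 'stainless', 'galv': 'galvanized'}

def normalize(line):
    """Return the line as a list of words, with typos folded to their
    synonym root and common bi-grams merged into single code words."""
    words = [SYNONYM_ROOTS.get(w, w) for w in line.lower().split()]
    merged, i = [], 0
    while i < len(words):
        pair = tuple(words[i:i + 2])
        if pair in BIGRAM_MERGES:
            merged.append(BIGRAM_MERGES[pair])
            i += 2
        else:
            merged.append(words[i])
            i += 1
    return merged

print(normalize("STNLESS STEEL HEX NUT"))
# -> ['stainless_steel', 'hex', 'nut']
```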
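And finally a sketch tying together the last-word heuristic, the mini-dictionary and the two-tier idea; the dictionary content is made up, and real input would first go through the normalization pass above:

```python
# Hypothetical mini-dictionary: word -> (is_noun, class or None).
DICTIONARY = {
    'nut':             (True,  'fastener'),
    'bolt':            (True,  'fastener'),
    'gasket':          (True,  'seal'),
    'stainless_steel': (False, None),      # modifier, carries no class
}

def looks_like_modifier(word):
    # Heuristic from the bullets above: words mixed with digits are
    # treated as modifiers (sizes, part numbers), not nouns.
    return any(ch.isdigit() for ch in word)

def classify(line, manual_pile):
    # Main key: the last word before the first comma (or end of text).
    head = line.split(',')[0]
    for word in reversed(head.lower().split()):
        if looks_like_modifier(word):
            continue
        is_noun, label = DICTIONARY.get(word, (False, None))
        if is_noun and label is not None:
            return label
    manual_pile.append(line)   # second tier: route to manual review
    return None

pile = []
print(classify("stainless_steel hex nut 1/2 in", pile))   # -> 'fastener'
```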
I'm afraid the snippets above are only sketches rather than a primer towards a full solution, but frankly simple NLTK-based approaches are likely to be disappointing at best. Also, we would need a much bigger sample set of the input text to guide the selection of plausible approaches, including ones based on NLTK or NLP techniques at large.