I have a list that contains many sentences. I want to iterate through the list, removing from all sentences words like "and", "the", "a", "are", etc.
I tried this:
def removearticles(text):
    articles = {'a': '', 'an': '', 'and': '', 'the': ''}
    for i, j in articles.items():
        text = text.replace(i, j)
    return text
As you can probably tell, however, this will remove "a" and "an" when they appear in the middle of a word. I need to remove only the instances of the words when they are delimited by whitespace, not when they occur inside a word. What is the most efficient way of going about this?
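For illustration, the failure mode is easy to reproduce with a standalone sketch (my own example input): replacing "a" first strips every letter "a" from the string, not just the standalone word.

```python
text = "a banana and an apple"
# str.replace matches substrings, not whitespace-delimited words,
# so "a" inside "banana" and "apple" is removed as well.
for article in ("a", "an", "and", "the"):
    text = text.replace(article, "")
print(repr(text))  # ' bnn nd n pple'
```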
I would go for regex, something like:
import re

def removearticles(text):
    return re.sub(r'(\s+)(a|an|and|the)(\s+)', r'\1\3', text)
or if you want to remove the leading whitespace as well:
def removearticles(text):
    return re.sub(r'\s+(a|an|and|the)(\s+)', r'\2', text)
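Run on a sample sentence (my own example input), the second pattern behaves like this:

```python
import re

# Each article is dropped together with its leading whitespace;
# the trailing whitespace (group 2) is kept as the separator.
pattern = r'\s+(a|an|and|the)(\s+)'
print(re.sub(pattern, r'\2', 'this is a test of the function'))  # this is test of function
```

One caveat: consecutive articles ("an the and") are only partially removed, because each match consumes the trailing whitespace that the next match would need as its leading whitespace.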
This looks more like an NLP job than something you would do with straight regex. I would check out NLTK (http://www.nltk.org/). IIRC it comes with a corpus of stopwords, filler words like the ones you're trying to get rid of.
def removearticles(text):
    articles = {'a': '', 'an': '', 'and': '', 'the': ''}
    rest = []
    for word in text.split():
        if word not in articles:
            rest.append(word)
    return ' '.join(rest)
The in operator runs faster on a dict than on a list.
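A set gives the same O(1) membership test without the dummy dict values. This is a sketch of the same filter in that style (the stopword set is my own choice):

```python
# Set membership is O(1), like dict key lookup.
ARTICLES = {'a', 'an', 'and', 'the'}

def removearticles(text):
    return ' '.join(word for word in text.split() if word not in ARTICLES)

print(removearticles('a line with an idea and the plan'))  # line with idea plan
```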
Try something along the lines of

def removearticles(text):
    articles = ['and', 'a']
    newText = ''
    for word in text.split(' '):
        if word not in articles:
            newText += word + ' '
    return newText[:-1]
It can be done using regex. Iterate through your strings, or ''.join the list into a single string, and run it through the following regex.
>>> import re
>>> rx = re.compile(r'\ban\b|\bthe\b|\band\b|\ba\b')
>>> ' '.join(rx.sub(' ', 'a line with lots of an the and a baad').split())
'line with lots of baad'
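If capitalized articles at the start of a sentence should go too, the same pattern can be compiled case-insensitively. This is an extension of the answer above, not part of it:

```python
import re

# re.IGNORECASE makes the pattern match "A", "The", etc. as well;
# \b still protects words like "baad" that merely contain "a".
rx = re.compile(r'\ban\b|\bthe\b|\band\b|\ba\b', re.IGNORECASE)
cleaned = ' '.join(rx.sub(' ', 'A line with The words And a baad').split())
print(cleaned)  # line with words baad
```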