How do I use ASCIIFoldingFilter in my Lucene app?

I have a standard Lucene app which searches an index. My index contains a lot of French terms and I'd like to use the ASCIIFoldingFilter.

I've done a lot of searching and I have no idea how to use it. The constructor takes a TokenStream object; do I call the method on the analyzer that returns a TokenStream when you give it a field? Then what do I do? Can someone point me to an example where a TokenFilter is being used? Thanks.


Token filters, like the ASCIIFoldingFilter, are at their base TokenStreams, so they are something the Analyzer returns, mainly through the following method:

public abstract TokenStream tokenStream(String fieldName, Reader reader);

As you have noticed, the filters take a TokenStream as input. They act like wrappers or, more precisely, like decorators of their input: they enhance the behavior of the wrapped TokenStream, performing their own operation on top of the operation of the wrapped stream.
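
To see the wrapping in action outside of an Analyzer, here is a minimal, stand-alone sketch. It assumes a Lucene 4.x-era API (Version.LUCENE_4_9; the exact tokenizer constructors vary between releases), and the class name and sample string are just for illustration. It decorates a StandardTokenizer with an ASCIIFoldingFilter and prints the folded terms:

import java.io.StringReader;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.miscellaneous.ASCIIFoldingFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public class FoldingDemo {
    public static void main(String[] args) throws Exception {
        // The tokenizer is the source; the filter decorates it.
        Tokenizer source = new StandardTokenizer(Version.LUCENE_4_9, new StringReader("déjà vu"));
        TokenStream stream = new ASCIIFoldingFilter(source);

        // Pull the folded terms out of the decorated stream.
        CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
        stream.reset();
        while (stream.incrementToken()) {
            System.out.println(term.toString()); // prints "deja", then "vu"
        }
        stream.end();
        stream.close();
    }
}

Consuming a TokenStream by hand like this is mostly useful for inspecting an analysis chain; in a real application the Analyzer builds the chain for you, as shown next.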

You can find an explanation here. It is not directly referring to an ASCIIFoldingFilter, but the same principle applies. Basically, you create a custom Analyzer with something like this in it (stripped-down example):

public class CustomAnalyzer extends Analyzer {
  // other content omitted
  // ...
  public TokenStream tokenStream(String fieldName, Reader reader) {
    TokenStream result = new StandardTokenizer(reader);
    result = new StandardFilter(result);
    result = new LowerCaseFilter(result);
    // etc etc ...
    result = new StopFilter(result, yourSetOfStopWords);
    result = new ASCIIFoldingFilter(result);
    return result;
  }
  // ...
}

Both TokenFilter and Tokenizer are subclasses of TokenStream.

Remember also that you must use the same custom analyzer both when indexing and when searching, or your queries may return incorrect results.


The structure of the Analyzer abstract class has changed over the years. The tokenStream method is final in the current release (v4.9.0), and createComponents is the method to override instead. The following class should do the job:

import java.io.Reader;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.LowerCaseFilter;
import org.apache.lucene.analysis.core.StopFilter;
import org.apache.lucene.analysis.miscellaneous.ASCIIFoldingFilter;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.standard.StandardFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.util.StopwordAnalyzerBase;
import org.apache.lucene.util.Version;

// Accent-insensitive analyzer
public class AccentInsensitiveAnalyzer extends StopwordAnalyzerBase {
    public AccentInsensitiveAnalyzer(Version matchVersion) {
        super(matchVersion, StandardAnalyzer.STOP_WORDS_SET);
    }

    @Override
    protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
        final Tokenizer source = new StandardTokenizer(matchVersion, reader);

        TokenStream tokenStream = source;
        tokenStream = new StandardFilter(matchVersion, tokenStream);
        tokenStream = new LowerCaseFilter(matchVersion, tokenStream); // 4.x LowerCaseFilter also takes matchVersion
        tokenStream = new StopFilter(matchVersion, tokenStream, getStopwordSet());
        tokenStream = new ASCIIFoldingFilter(tokenStream); // fold accents (é, à, ç, ...) to ASCII
        return new TokenStreamComponents(source, tokenStream);
    }
}
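
To tie this back to the earlier point about using the same analyzer for indexing and searching, here is a minimal sketch. It assumes Lucene 4.9, an in-memory RAMDirectory, and a hypothetical field name and sample text; it simply hands the same AccentInsensitiveAnalyzer instance to both the IndexWriter and the QueryParser so that the unaccented query "elegie" matches the indexed "Élégie":

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class AccentInsensitiveSearchDemo {
    public static void main(String[] args) throws Exception {
        // One analyzer instance, used for both indexing and querying.
        Analyzer analyzer = new AccentInsensitiveAnalyzer(Version.LUCENE_4_9);
        Directory dir = new RAMDirectory();

        // Index a document containing accented French text.
        IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(Version.LUCENE_4_9, analyzer));
        Document doc = new Document();
        doc.add(new TextField("title", "Élégie à la mémoire", Field.Store.YES));
        writer.addDocument(doc);
        writer.close();

        // Search with an unaccented query; the same analyzer folds it the same way.
        IndexReader reader = DirectoryReader.open(dir);
        IndexSearcher searcher = new IndexSearcher(reader);
        Query query = new QueryParser(Version.LUCENE_4_9, "title", analyzer).parse("elegie");
        TopDocs hits = searcher.search(query, 10);
        System.out.println(hits.totalHits); // expected: 1
        reader.close();
    }
}

Because the analyzer lower-cases and folds both the indexed text and the query text, "elegie", "Elegie" and "élégie" all resolve to the same indexed term.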