Sorry for bad English :(
Suppose I can organize the recipe and ingredient data in advance, in any way I like.

How can I efficiently search for recipes by user-provided ingredients, preferably sorted by best match? That is: first the recipes that use the most of the provided ingredients and contain no others, then recipes that use fewer of the provided set but still no extra ingredients, then recipes with the fewest additional requirements, and so on.

All I can think of is representing each recipe's ingredients as a bitmask and comparing the required bitmask against every recipe, but that is obviously a brute-force way to go.

And I don't see how to apply related techniques like Levenshtein distance here.

I believe this should be quite a common task...
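For reference, the brute-force bitmask idea mentioned above might look like this in Python (made-up data; a linear scan over all recipes, which is exactly what makes it unattractive at scale):

```python
# Sketch of the bitmask approach: each recipe's ingredient set is
# encoded as an integer bitmask, where bit i is set when ingredient
# with id i is used.

def to_mask(ingredient_ids):
    mask = 0
    for i in ingredient_ids:
        mask |= 1 << i
    return mask

def rank_recipes(recipes, available_ids):
    """Rank recipes as described in the question: recipes with no
    extra ingredients first, then by fewest unused provided ones."""
    avail = to_mask(available_ids)
    ranked = []
    for name, ids in recipes.items():
        mask = to_mask(ids)
        excess = bin(mask & ~avail).count("1")    # recipe ingredients we don't have
        shortage = bin(avail & ~mask).count("1")  # provided ingredients the recipe skips
        ranked.append((excess, shortage, name))
    ranked.sort()
    return ranked

recipes = {
    "omelette": {1, 2},
    "pancakes": {1, 2, 3},
    "stew": {4, 5},
}
print(rank_recipes(recipes, {1, 2, 3}))
# pancakes uses all three provided ingredients and nothing else,
# so it sorts first; stew needs two ingredients we don't have.
```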
It sounds like you're talking about sets - "available ingredients" is one set, and you want to find all recipes whose ingredients form a subset of that, ordered by size. Sets are efficiently implemented as balanced trees or hashtables.
It gets a bit more complicated when you want to address different amounts of ingredients.
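The set-based matching described here can be sketched in Python (illustrative data; ingredient names stand in for whatever keys you actually use):

```python
# Subset matching with sets: keep only recipes whose ingredients are
# fully covered by what's available, largest recipes first.

available = {"egg", "milk", "flour", "sugar"}

recipes = {
    "pancakes": {"egg", "milk", "flour"},
    "omelette": {"egg", "milk"},
    "bread": {"flour", "yeast", "water"},
}

# '<=' is the subset test on Python sets.
matches = sorted(
    (name for name, ings in recipes.items() if ings <= available),
    key=lambda name: len(recipes[name]),
    reverse=True,
)
print(matches)  # bread is excluded because it needs yeast and water
```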
Edit: If your recipe data is stored in an SQL database, it should actually be possible to do the whole thing efficiently as a single SQL query (which will use hash tables and trees internally). But it's going to be a pretty complex query; better ask someone who's better at SQL than me (and of course your actual table structure would be needed).
Actually, I would use a tool like Lucene, as it already knows how to do more or less what you need. Your ingredients would be keywords in the Lucene index and the recipes would be the documents. You could then search against the Lucene index and it would give you all the matching recipes, and can even report a confidence level for each.
Lucene is open source with implementations for many languages including .NET, Java, PHP and many others. See this link for more info. There is a link on that page for all the related projects.
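At its core, this kind of keyword matching rests on an inverted index, mapping each ingredient to the recipes that use it. A minimal sketch of that idea (hypothetical data; real Lucene adds tokenizing, scoring, and much more):

```python
# Minimal inverted-index sketch: ingredient keyword -> recipe names.
from collections import defaultdict

recipes = {
    "pancakes": ["egg", "milk", "flour"],
    "omelette": ["egg", "milk"],
    "bread": ["flour", "yeast", "water"],
}

# Build the index once, up front.
index = defaultdict(set)
for name, ings in recipes.items():
    for ing in ings:
        index[ing].add(name)

# Query: count how many of the user's ingredients each recipe matches.
query = ["egg", "milk", "flour"]
hits = defaultdict(int)
for ing in query:
    for name in index.get(ing, ()):
        hits[name] += 1

# Rank by number of matched ingredients (a crude "confidence").
ranked = sorted(hits.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)
```

Only recipes sharing at least one query ingredient are ever touched, which is the same pruning the `WHERE recipes.id IN (...)` subquery in the PostgreSQL answer below performs.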
Just for the record: I am doing some benchmarking, and the first approach I've tested is a PostgreSQL implementation using subqueries and the intarray type.
So, I have a traditional normalized database with these tables:

recipes (id, name, descr), pk(id)
ingridients (id, name, descr), pk(id)
r2i (recipe_id, ingridient_id), unique(recipe_id, ingridient_id) (it seems I don't need that index, since it covers the whole table)

The name and descr columns are filled with junk just to make the tables bigger ;-) Overall I loaded 200 ingredients and 5000 recipes; each recipe has from 3 to 10 ingredients, for about 35k rows in r2i.
Suppose I want to search for recipes matching my ingredient set 129,99,56,180. The query looks like this:
SELECT recipe_id, recipe_ingrs,
       icount('{129,99,56,180}'::int[] - recipe_ingrs) AS shortage,
       icount(recipe_ingrs - '{129,99,56,180}'::int[]) AS excess
FROM (
    SELECT id AS recipe_id,
           array(SELECT ingridient_id FROM r2i WHERE r2i.recipe_id = recipes.id)::int[] AS recipe_ingrs
    FROM recipes
    WHERE recipes.id IN (SELECT DISTINCT recipe_id FROM r2i WHERE ingridient_id IN (129,99,56,180))
) AS t
ORDER BY excess ASC, shortage ASC;
The query cost is about 7k (depending on the set you query for), but on my Windows test notebook (Core 2 Duo, 2 GB of RAM) it runs very fast, instantly to the human eye :)
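To make the ranking concrete, here is what the query's `shortage` and `excess` columns compute, expressed in plain Python (set difference mirrors intarray's `-` operator, `len` mirrors `icount`; the data is made up):

```python
# shortage = provided ingredients the recipe doesn't use
# excess   = recipe ingredients not in the provided set

provided = {129, 99, 56, 180}

recipe_ingrs = {
    1: {129, 99, 56},          # subset of provided: no excess, one unused
    2: {129, 99, 56, 180, 7},  # needs one extra ingredient (7)
    3: {129, 180},             # no excess, two provided ingredients unused
}

rows = []
for recipe_id, ingrs in recipe_ingrs.items():
    shortage = len(provided - ingrs)
    excess = len(ingrs - provided)
    rows.append((recipe_id, shortage, excess))

# ORDER BY excess ASC, shortage ASC
rows.sort(key=lambda r: (r[2], r[1]))
print(rows)
```

Recipes that need nothing beyond the provided set sort first, and among those the fullest matches win, which is exactly the ordering asked for in the question.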
There is documentation available for the intarray type.

Testing is not yet complete; I have two more solutions to test, plus some speed numbers to collect.