Background:

Is it possible to develop a powerful web search engine using Erlang, Mnesia and Yaws?

Source: the Internet

I am thinking of developing a web search engine using Erlang, Mnesia and Yaws. Is it possible to build a powerful and fast web search engine with this software? What would it take to accomplish this, and where should I start?


4 Answers

#1



Erlang is well suited to building one of the most powerful web crawlers around today. Let me take you through my simple crawler.


Step 1. I create a simple parallelism module, which I call mapreduce.


-module(mapreduce).
-export([compute/2]).
%%=====================================================================
%% usage example
%% Module = string
%% Function = tokens
%% List_of_arg_lists = [["file\r\nfile","\r\n"],["muzaaya_joshua","_"]]
%% Ans = [["file","file"],["muzaaya","joshua"]]
%% Job being done by two processes
%% i.e no. of processes spawned = length(List_of_arg_lists)

compute({Module,Function},List_of_arg_lists)->
    S = self(),
    Ref = erlang:make_ref(),
    PJob = fun(Arg_list) -> erlang:apply(Module,Function,Arg_list) end,
    Spawn_job = fun(Arg_list) -> 
                    spawn(fun() -> execute(S,Ref,PJob,Arg_list) end)
                end,
    lists:foreach(Spawn_job,List_of_arg_lists),
    gather(length(List_of_arg_lists),Ref,[]).
gather(0, _, L) -> L;
gather(N, Ref, L) ->
    receive
        {Ref, {'EXIT', _}} -> gather(N-1, Ref, L);
        {Ref, Result}      -> gather(N-1, Ref, [Result|L])
    end.

execute(Parent, Ref, Fun, Arg) ->
    Parent ! {Ref, (catch Fun(Arg))}.
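With the module above compiled, a shell session might look like the following. Note that gather/3 collects replies as they arrive, so the order of the results is not guaranteed:

```erlang
1> mapreduce:compute({string, tokens},
                     [["file\r\nfile", "\r\n"], ["muzaaya_joshua", "_"]]).
%% => [["muzaaya","joshua"],["file","file"]]  (either order, depending on
%%    which spawned process replies first)
```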

Step 2. The HTTP Client

One would normally use either the inets httpc module built into Erlang, or ibrowse. However, for memory management and speed (keeping the memory footprint as low as possible), a good Erlang programmer would choose to shell out to curl. By using os:cmd/1 with a curl command line, the output arrives directly in the calling Erlang function. Better still, have curl write its output to files, and let another process in our application read and parse those files.


Command: curl "https://www.erlang.org" -o "/downloaded_sites/erlang/file1.html"
In Erlang
os:cmd("curl \"https://www.erlang.org\" -o \"/downloaded_sites/erlang/file1.html\"").
So you can spawn many such processes. Remember to escape the URL as well as the output file path when you build that command. Meanwhile, there is another process whose job is to watch the directory of downloaded pages. It reads and parses these pages, and may then delete them after parsing, save them in a different location, or, even better, archive them using the zip module.
folder_check()->
    spawn(fun() -> check_and_report() end),
    ok.

-define(CHECK_INTERVAL,5).

check_and_report()->
    %% avoid using
    %% filelib:list_dir/1
    %% if files are many, memory !!!
    case os:cmd("ls | wc -l") of
        "0\n" -> ok;
        "0" -> ok;
        _ -> ?MODULE:new_files_found()
    end,
    timer:sleep(timer:seconds(?CHECK_INTERVAL)),
    %% keep checking
    check_and_report().

new_files_found()->
    %% inform our parser to pick files
    %% once it parses a file, it has to 
    %% delete it or save it some
    %% where else
    gen_server:cast(?MODULE,files_detected).
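The curl invocation shown earlier can be wrapped so that each URL is fetched in its own process, writing into the directory that the watcher above checks. A minimal sketch (the module name, file-naming scheme and curl flags here are illustrative, not from the original):

```erlang
-module(downloader).
-export([fetch_all/2]).

%% Fetch every URL in its own process; each page lands in Dir,
%% where the directory watcher will find it.
fetch_all(Urls, Dir) ->
    lists:foreach(
        fun(Url) -> spawn(fun() -> fetch(Url, Dir) end) end,
        Urls).

fetch(Url, Dir) ->
    %% derive a short file name from the URL
    File = filename:join(Dir, integer_to_list(erlang:phash2(Url)) ++ ".html"),
    %% quote both the URL and the output path before handing them to the shell
    os:cmd("curl -s \"" ++ Url ++ "\" -o \"" ++ File ++ "\"").
```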

Step 3. The HTML parser.
Better to use mochiweb's HTML parser together with XPath. This will help you parse the markup, get at all your favourite HTML tags, extract their contents, and then you are good to go. In the examples below, I focused only on the keywords, description and title in the markup.

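A rough sketch of such a parser follows, assuming mochiweb and the separate mochiweb_xpath library are on the code path (the module and function names here are illustrative; the original spider_bot code is not shown in the answer):

```erlang
-module(spider_parse).
-export([meta/1]).

%% Parse raw HTML and pull out the title and all meta tags.
meta(Html) ->
    Tree  = mochiweb_html:parse(Html),               %% {Tag, Attrs, Children} tuples
    Title = mochiweb_xpath:execute("//title/text()", Tree),
    Metas = mochiweb_xpath:execute("//meta", Tree),
    {Title, Metas}.
```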


Module Testing in shell...awesome results!!!


2> spider_bot:parse_url("https://erlang.org").
[[[],[],
  {"keywords",
   "erlang, functional, programming, fault-tolerant, distributed, multi-platform, portable, software, multi-core, smp, concurrency "},
  {"description","open-source erlang official website"}],
 {title,"erlang programming language, official website"}]

3> spider_bot:parse_url("https://facebook.com").
[[{"description",
   " facebook is a social utility that connects people with friends and others who work, study and live around them. people use facebook to keep up with friends, upload an unlimited number of photos, post links
 and videos, and learn more about the people they meet."},
  {"robots","noodp,noydir"},
    [],[],[],[]],
 {title,"incompatible browser | facebook"}]

4> spider_bot:parse_url("https://python.org").
[[{"description",
   "      home page for python, an interpreted, interactive, object-oriented, extensible\n      programming language. it provides an extraordinary combination of clarity and\n      versatility, and is free and
comprehensively ported."},
  {"keywords",
   "python programming language object oriented web free source"},
  []],
 {title,"python programming language – official website"}]

5> spider_bot:parse_url("https://www.house.gov/").
[[[],[],[],
  {"description",
   "home page of the united states house of representatives"},
  {"description",
   "home page of the united states house of representatives"},
  [],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],
  [],[],[]|...],
 {title,"united states house of representatives, 111th congress, 2nd session"}]


You can now see that we can index pages against their keywords, together with a good schedule of page revisits. Another challenge was how to make a crawler (something that will move around the entire web, from domain to domain), but that one is easy. It is possible by parsing an HTML file for its href tags. Make the HTML parser extract all href attributes, and then you might need some regular expressions here and there to pick out the links under a given domain.
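For the href extraction itself, one way to pull the attribute values out of raw markup with the re module (a sketch; real-world pages need more robust handling of single quotes, relative paths and so on):

```erlang
%% Return every double-quoted href value found in Html.
extract_links(Html) ->
    case re:run(Html, "href=\"([^\"]+)\"",
                [global, {capture, [1], list}]) of
        {match, Links} -> [Link || [Link] <- Links];
        nomatch        -> []
    end.
```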

Running the crawler


7> spider_connect:conn2("https://erlang.org").        

        Links: ["https://www.erlang.org/index.html",
                "https://www.erlang.org/rss.xml",
                "https://erlang.org/index.html","https://erlang.org/about.html",
                "https://erlang.org/download.html",
                "https://erlang.org/links.html","https://erlang.org/faq.html",
                "https://erlang.org/eep.html",
                "https://erlang.org/starting.html",
                "https://erlang.org/doc.html",
                "https://erlang.org/examples.html",
                "https://erlang.org/user.html",
                "https://erlang.org/mirrors.html",
                "https://www.pragprog.com/titles/jaerlang/programming-erlang",
                "https://oreilly.com/catalog/9780596518189",
                "https://erlang.org/download.html",
                "https://www.erlang-factory.com/conference/ErlangUserConference2010/speakers",
                "https://erlang.org/download/otp_src_R14B.readme",
                "https://erlang.org/download.html",
                "https://www.erlang-factory.com/conference/ErlangUserConference2010/register",
                "https://www.erlang-factory.com/conference/ErlangUserConference2010/submit_talk",
                "https://www.erlang.org/workshop/2010/",
                "https://erlangcamp.com","https://manning.com/logan",
                "https://erlangcamp.com","https://twitter.com/erlangcamp",
                "https://www.erlang-factory.com/conference/London2010/speakers/joearmstrong/",
                "https://www.erlang-factory.com/conference/London2010/speakers/RobertVirding/",
                "https://www.erlang-factory.com/conference/London2010/speakers/MartinOdersky/",
                "https://www.erlang-factory.com/",
                "https://erlang.org/download/otp_src_R14A.readme",
                "https://erlang.org/download.html",
                "https://www.erlang-factory.com/conference/London2010",
                "https://github.com/erlang/otp",
                "https://erlang.org/download.html",
                "https://erlang.org/doc/man/erl_nif.html",
                "https://github.com/erlang/otp",
                "https://erlang.org/download.html",
                "https://www.erlang-factory.com/conference/ErlangUserConference2009",
                "https://erlang.org/doc/efficiency_guide/drivers.html",
                "https://erlang.org/download.html",
                "https://erlang.org/workshop/2009/index.html",
                "https://groups.google.com/group/erlang-programming",
                "https://www.erlang.org/eeps/eep-0010.html",
                "https://erlang.org/download/otp_src_R13B.readme",
                "https://erlang.org/download.html",
                "https://oreilly.com/catalog/9780596518189",
                "https://www.erlang-factory.com",
                "https://www.manning.com/logan",
                "https://www.erlang.se/euc/08/index.html",
                "https://erlang.org/download/otp_src_R12B-5.readme",
                "https://erlang.org/download.html",
                "https://erlang.org/workshop/2008/index.html",
                "https://www.erlang-exchange.com",
                "https://erlang.org/doc/highlights.html",
                "https://www.erlang.se/euc/07/",
                "https://www.erlang.se/workshop/2007/",
                "https://erlang.org/eep.html",
                "https://erlang.org/download/otp_src_R11B-5.readme",
                "https://pragmaticprogrammer.com/titles/jaerlang/index.html",
                "https://erlang.org/project/test_server",
                "https://erlang.org/download-stats/",
                "https://erlang.org/user.html#smtp_client-1.0",
                "https://erlang.org/user.html#xmlrpc-1.13",
                "https://erlang.org/EPLICENSE",
                "https://erlang.org/project/megaco/",
                "https://www.erlang-consulting.com/training_fs.html",
                "https://erlang.org/old_news.html"]
ok
Storage: this is one of the most important concepts for a search engine. It is a big mistake to store search engine data in an RDBMS like MySQL, Oracle, MS SQL, etc. Such systems are complex, and the applications that interface with them employ heuristic algorithms. This brings us to key-value stores, of which my two favourites are Couchbase Server and Riak. These are great distributed storage systems. Another important parameter is caching, attained using, say, Memcached, which both of the storage systems mentioned above support. Storage systems for search engines ought to be schemaless DBMSs that favour availability over consistency. Read more on search engines here: https://en.wikipedia.org/wiki/Web_search_engine

#2



As far as I know, Powerset's natural language processing search engine was developed using Erlang.


Did you look at CouchDB (which is written in Erlang as well) as a possible tool to help you solve a few problems along the way?


#3



I would recommend CouchDB instead of Mnesia.


  • Mnesia doesn't have Map-Reduce, CouchDB does (correction - see comments)

  • Mnesia is statically typed, CouchDB is a document database (and pages are documents, i.e. a better fit to the information model in my opinion)

  • Mnesia is primarily intended to be a memory-resident database

YAWS is pretty good. You should also consider MochiWeb.


You won't go wrong with Erlang.


#4



In the 'rdbms' contrib, there is an implementation of the Porter Stemming Algorithm. It was never integrated into 'rdbms', so it's basically just sitting out there. We have used it internally, and it worked quite well, at least for datasets that weren't huge (I haven't tested it on huge data volumes).


The relevant modules are:


rdbms_wsearch.erl
rdbms_wsearch_idx.erl
rdbms_wsearch_porter.erl

Then there is, of course, the Disco Map-Reduce framework.


Whether or not you can make the fastest engine out there, I couldn't say. Is there a market for a faster search engine? I've never had problems with the speed of e.g. Google. But a search facility that increased my chances of finding good answers to my questions would interest me.


