"The query contained only ignored words" (MS Indexing Service)

Hello experts, I have an annoying problem with MS Indexing Service.
I'm running a website and I want to build a search engine for it with MS Indexing Service.
My content files are not ASCII encoded (they contain Arabic characters). After building the search utility and testing it successfully on my personal computer, I'm getting the following error message on my live server: "The query contained only ignored words".
I get this error only for non-ASCII words. I know that Arabic is not supported by Indexing Service, so how do I set the word breaker to the "Neutral" language?
I'm looking forward to your kind help.
niazid asked:
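(For context: a search utility built on Indexing Service typically issues a query like the sketch below through the MSIDXS OLE DB provider; the column list and the search term are placeholders, not the actual utility's code.)

      -- Illustrative Indexing Service query (MSIDXS OLE DB provider, SQL dialect).
      -- FileName, Path, Rank and Contents are standard catalog properties;
      -- the search term is only a placeholder.
      SELECT FileName, Path, Rank
      FROM   SCOPE()                               -- default scope of the catalog
      WHERE  FREETEXT(Contents, 'placeholder word')
      ORDER  BY Rank DESC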
 
Ralf Klatt, Principal Consultant, commented:
Hi,

Well, the header information I'm using for Arabic sites looks like this:

<html xmlns="http://www.w3.org/1999/xhtml" lang="ar" dir="RTL">
      <head>
            <meta http-equiv="Content-Type" content="text/html; charset=windows-1256">

You might try it on your websites and test it.


Best regards,
Raisor
 
Ralf Klatt, Principal Consultant, commented:
Hi,

There's a product at http://www.innerprise.net/products.asp that provides search engine functionality with Arabic support, as you can read in their admin guide: http://www.innerprise.net/sql/manual/admin_guide.pdf

I'm not telling you to buy that product ... but it at least shows there must be a way to make this possible using SQL Server ...

I have many different languages in a SQL Server database: the fields that hold Greek, Russian, Japanese and Arabic text are of type nvarchar, and a simple text search does find the values in them. So if you placed the contents of your web pages in a database, with a link back to the HTTP address of each page, you could to some degree "simulate" what you're trying to do. The only issue I see is with very long pieces of text that would have to be kept in an ntext field: SQL Server doesn't allow text search on ntext fields. The workaround would be to chunk long texts into many smaller pieces and store them as part1, part2 etc., each with a relation to the web page they can be found on.
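To sketch what I mean (the table and column names below are made up purely for illustration, they are not part of any existing product):

      -- Long page texts are chunked into nvarchar parts so each part stays
      -- searchable (ntext would not be); every part keeps a reference back
      -- to the web page it belongs to. All names here are hypothetical.
      CREATE TABLE PageText (
          PageUrl  nvarchar(400)  NOT NULL,   -- address of the web page
          PartNo   int            NOT NULL,   -- part 1, part 2, ... of the page text
          PartText nvarchar(4000) NOT NULL,   -- Unicode, so Arabic text is preserved
          CONSTRAINT PK_PageText PRIMARY KEY (PageUrl, PartNo)
      );

      -- A simple text search across all parts, returning the matching pages.
      DECLARE @SearchTerm nvarchar(100);
      SET @SearchTerm = N'example';

      SELECT DISTINCT PageUrl
      FROM   PageText
      WHERE  PartText LIKE N'%' + @SearchTerm + N'%';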

If that's too much in the first place, you could have a look at this freeware, either to use it or simply to study its approach: http://www.egsx.com/products/ebhath/


Best regards,
Raisor
 
niazid, Author, commented:
Thanks Raisor, but I don't want to put my text files in the database.
 
Ralf Klatt, Principal Consultant, commented:
Hi,

In that case, http://www.egsx.com/products/ebhath/ could be useful, as it's made for websites and intranets ...


Best regards,
Raisor
 
niazid, Author, commented:
Thanks Raisor, I'm used to MS Indexing Service; it's really good.
This note may be important: my text files are UTF-8 encoded, and I put a charset header at the top of every text file:
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">