continuity (United Kingdom of Great Britain and Northern Ireland) asked:

Unrestricted Access to Pages for Search Engine Crawlers

Hi,

I'm developing a Java-based website for a project I'm working on.  The site requires users to register and log in to access the content.  However, the client has requested that search engine crawlers be able to index some of the restricted areas of the site.  When a page is found through one of these search engines, the link should take the requester to the registration page and, once they have registered, on to the page originally found in the search.  To implement this I believe I need some kind of filter at either the application server or web-app level.  This filter needs to control access to the secure, indexable pages based on the following rules:

1. If the requester is a search engine crawler, let it access the page.

2. If the requester is a browser with a valid authenticated session, let it access the page.

3. In all other cases, redirect the browser to the login/registration page and store the originally requested page in the session.

What's the best way of implementing this?  Am I right in thinking I need a filter, and if so, how would I create one?  We're using WebLogic as the app server for the site.
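To make the requirement concrete, here's a rough sketch of the sort of servlet Filter I have in mind. The crawler user-agent strings, session attribute name and login page URL below are just placeholders rather than anything we've actually settled on:

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

/**
 * Guards the indexable-but-restricted pages.
 * Crawler user-agent fragments and the login page URL are placeholders.
 */
public class CrawlerAwareAccessFilter implements Filter {

    // Placeholder list of crawler user-agent fragments (lower-case).
    private static final String[] CRAWLER_AGENTS = {"googlebot", "bingbot", "slurp"};

    private String loginPage = "/login.jsp"; // placeholder default

    public void init(FilterConfig config) throws ServletException {
        String configured = config.getInitParameter("loginPage");
        if (configured != null) {
            loginPage = configured;
        }
    }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {

        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        // Rule 1: known search engine crawlers go straight through.
        if (isCrawler(request.getHeader("User-Agent"))) {
            chain.doFilter(req, res);
            return;
        }

        // Rule 2: browsers with an authenticated session also go through.
        HttpSession session = request.getSession(false);
        if (session != null && session.getAttribute("authenticatedUser") != null) {
            chain.doFilter(req, res);
            return;
        }

        // Rule 3: remember the requested page, then redirect to login/registration.
        HttpSession newSession = request.getSession(true);
        String target = request.getRequestURI();
        if (request.getQueryString() != null) {
            target += "?" + request.getQueryString();
        }
        newSession.setAttribute("requestedPage", target);
        response.sendRedirect(request.getContextPath() + loginPage);
    }

    private boolean isCrawler(String userAgent) {
        if (userAgent == null) {
            return false;
        }
        String ua = userAgent.toLowerCase();
        for (String crawler : CRAWLER_AGENTS) {
            if (ua.indexOf(crawler) != -1) {
                return true;
            }
        }
        return false;
    }

    public void destroy() {
    }
}

The idea would be to map this filter in web.xml against the URL patterns covering the indexable restricted pages, and have the login/registration code pull "requestedPage" out of the session to complete the redirect after sign-up. I realise a User-Agent check like this is trivially spoofable, so it only keeps casual visitors out rather than anyone determined.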

Many thanks in advance.
ASKER CERTIFIED SOLUTION
bloodredsun (Australia)

continuity (ASKER)

That's great, thanks very much.