Solved

Apache 404 and .htaccess files

Posted on 2007-03-28
1,564 Views
Last Modified: 2008-02-26
Hello All,

We have a bizarre problem which I hope somebody can shed some light on.  We have a content management system which we built, which uses the Apache ErrorDocument directive in a .htaccess file to control the 404 errors produced by Apache.  Basically, when a page/resource is not found, we run a file and see if the user wants to create it.

Now the problem is that I believe the 404 errors are still produced by Apache, so our rankings on Google are going down the tubes.

My question is: if we use the ErrorDocument directive in the .htaccess file, will Apache still produce a 404 error even though we tell it to run another file?
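
For reference, a minimal sketch of the kind of .htaccess setup being described (the handler file name here is hypothetical):

# .htaccess
# Apache runs /page_creator.php to build the body of the error response.
# With a local URL-path like this, Apache uses an internal redirect and
# the status sent to the client remains 404 unless the script overrides it.
ErrorDocument 404 /page_creator.php

Note that if the ErrorDocument target is a full http:// URL, Apache responds with a 302 redirect instead and the 404 status is lost.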
Question by:seanostephens
4 Comments
 
LVL 50

Expert Comment

by:Steve Bink
ID: 18819697
AFAIK, the 404 would still be generated if you are allowing Apache to determine whether the file exists or not.  What is the URL flow for a non-existent page being requested from your server?  For example, mine goes through mod_rewrite and sends all traffic to a PHP handler... the PHP file will determine where to send the user from there.

If you are simply replacing the standard 404 error page with one of your own designed to do work, that should not prevent the 404 from being generated...in fact, I would think that strategy would DEPEND on it being generated.  

Finally, you can download Firefox and an extension called "Tamper Data".  This will allow you to see what headers your web server is generating in real time.
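
For a quick scripted check, PHP's get_headers() also shows the status line a URL returns (the script name and URL below are hypothetical, just for illustration):

<?php
// check_status.php -- hypothetical helper script
// usage: php check_status.php http://example.com/no-such-page.html
if (empty($argv[1])) die("usage: php check_status.php <url>\n");
$headers = get_headers($argv[1]);   // fetch the response headers for the URL
if ($headers === false) die("request failed\n");
// the first element is the status line; if the server redirects,
// later elements include the follow-up status lines as well
echo $headers[0] . "\n";            // e.g. "HTTP/1.1 404 Not Found"
?>

If that prints a 200 or 302 for a page that does not exist, search engines are seeing the same thing.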
 
LVL 15

Expert Comment

by:m1tk4
ID: 19705154
No comment has been added to this question in more than 21 days, so it is now classified as abandoned.

I will leave the following recommendation for this question in the Cleanup Zone:

  Accept: routinet {http:#18819697}

Any objections should be posted here in the next 4 days. After that time, the question will be closed.

m1tk4
Experts Exchange Cleanup Volunteer
 
LVL 50

Accepted Solution

by:
Steve Bink earned 500 total points
ID: 19719231
Current follow-up:

As of my last writing, my strategy was to prevent a 404 from being generated at all.  This was because none of the pages on my site actually exist - they are dynamically generated by PHP/MySQL each time they are called - and I did not want a genuine page to show up with a 404 header.  Since then, I've rewritten the code to produce a 404 once the handler determines there is no page to build.  Below you'll find the relevant snippets from my server conf files and loader code.  

# From httpd.conf
# intercepts file requests that do not actually exist
# this is a rather slow method involving a sub-request
# using an ErrorDocument for 404 will likely be much quicker
# rewrite for db_build_loader.php
RewriteCond %{LA-U:REQUEST_FILENAME} !-f
RewriteCond %{SCRIPT_FILENAME} !^/db_build_loader.php$
RewriteCond %{QUERY_STRING} !^.*loadreq.*$
RewriteRule ^/(.+)\.(html?|php)$ /db_build_loader.php?loadreq=%{REQUEST_FILENAME} [QSA,NC,NS,L]

<?php
// inside db_build_loader.php
// mod_rewrite lands here if the page does not exist on disk
// ($path presumably comes from the loadreq parameter set by the rule above)
// if parse instructions are not in the db, this is a real 404
if (!($pagerow = $result->fetch_assoc())) {
      error_log("Could not locate $path");
      error_log(print_r($_SERVER, 1));
      header("HTTP/1.0 404 Not Found");
      // emit the custom error page inline; a Location redirect here
      // would turn the response into a 302 and hide the 404 status
      include $_SERVER['DOCUMENT_ROOT'] . "/404.php";
      die();
}
?>
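
For comparison, here is a sketch of the ErrorDocument variant mentioned in the conf comments above. This is an assumption about how it could look, not code running on my server; the database lookup is stubbed:

<?php
// hypothetical ErrorDocument-based loader, assuming httpd.conf contains:
//   ErrorDocument 404 /db_build_loader.php
// Apache reaches the script via an internal redirect, so the pending
// status is already 404 and the original request is in REDIRECT_URL
$requested = isset($_SERVER['REDIRECT_URL']) ? $_SERVER['REDIRECT_URL'] : '';

$found = false;   // stub: result of the database lookup for $requested

if ($found) {
      header("HTTP/1.0 200 OK");   // a real page: replace the pending 404
      // ... build and emit the page from the db here ...
} else {
      // keep the 404 status; include the error body instead of
      // redirecting so search engines still see the 404
      include $_SERVER['DOCUMENT_ROOT'] . "/404.php";
}
?>

This skips the per-request sub-request (%{LA-U:...}) entirely, which is why it should be quicker than the rewrite approach.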
