Apache Web Server

The Apache HTTP Server is a secure, efficient and extensible server that provides HTTP services in sync with the current HTTP standards. Typically Apache is run on a Unix-like operating system, but it is available for a wide variety of operating systems, including Linux, Novell NetWare, Mac OS X and Windows. Released under the Apache License, Apache is open-source software.


Introduction
This article is intended for those who are new to PHP error handling.  It addresses one of the most common problems that plague beginning PHP developers: effective error visualization.

PHP error handling is well-documented in the online man pages, but the documentation often eludes beginners, who are trying to learn PHP by copying examples they found on the internet.  Copying code without understanding it is an anti-pattern, and there are so many bad PHP examples out there that it can sometimes be difficult to find good ones!  This article will help you get the most, quickly and easily, out of PHP's basic error handlers.  You can get excellent, informative and well-targeted error information by following a few simple examples.  So stop copying those obsolete internet examples, and instead make a few "teaching examples" for your own library.  By the time you've finished this article, you will have some great insights into how to diagnose the most common errors quickly and find good solutions.
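As a taste of the approach, here is a minimal "teaching example" of my own (an illustration, not code from the article): it raises error reporting to its maximum and displays errors on screen, which is appropriate on a development server only.

<?php
// Maximum error reporting: notices, warnings and errors all become visible.
error_reporting(E_ALL);

// Display errors in the browser -- suitable for development only;
// on a production server you would log errors instead of displaying them.
ini_set('display_errors', TRUE);

// A deliberate mistake: reading an undefined variable now produces
// a visible notice instead of failing silently.
echo $undefinedVariable;

Drop a snippet like this at the top of a script while you are developing, and every mistake announces itself instead of dying silently.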

What's Covered Here
PHP has unlimited opportunities for errors, but in practice only a few things are needed to get good diagnostics.  You need to be able to see the errors in PHP and you need to …
Author Comment by Ray Paseur:

Thanks, Martin.

Some time ago when I published articles, there was a cascade of approvals - something like "It's OK" then "It's good" then "It's really good" and each of these approvals gave some more points as well as comments about how the article can be made better.  Does E-E still do that?

If you are a web developer, you are probably aware of the <iframe> tag in HTML. "iframe" stands for inline frame; the tag is used to embed another document within the current HTML document. The embedded document can even be another website.
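For illustration (my own snippet, not part of the excerpt), embedding another page takes a single tag:

<!-- Embed another document inside the current page -->
<iframe src="https://www.example.com/" width="600" height="400" title="Embedded page"></iframe>

The browser renders the embedded document in its own nested browsing context, sized by the width and height attributes.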
In Solr 4.0 it is possible to atomically (or partially) update individual fields in a document. This article will show the operations possible for atomic updating, as well as how to set up your Solr instance to perform them. One major motivation for using atomic updating is being able to change part of a document without regenerating the entire document. So if your document is created from many different data sources where fetching the data might be expensive, atomic updating might be worth looking into.
 

Getting started

First, you must be using Solr 4.0; older versions do not support atomic updates. Second, all fields in your schema.xml file must be set to stored. So if your schema file looked like this:

<field name="id" type="number" indexed="true" stored="true" required="true" />
<field name="title" type="text_en" indexed="true" stored="false"/>
<field name="submit_date" type="date" indexed="true" stored="false" />
<field name="views" type="number" indexed="true" stored="false" />


all stored="false" attributes must be changed to stored="true" so that your schema file looks like this:

<field name="id" type="number" indexed="true" stored="true" required="true" />
<field name="title" type="text_en" indexed="true" stored="true"/>
<field name="submit_date" type="date" indexed="true" stored="true" />
<field name="views" type="number" indexed="true" stored="true" />


Out of the box, atomic updating should work once your schema file is configured this way. If it does not, the caveats and limitations section of the Solr wiki is a good place to look for further information.
 


Atomically updating fields in SolrJ

Once your instance of Solr is up and running and configured correctly, you can atomically update a document using SolrJ.  When atomically updating a field, it is possible to perform four actions (a SolrJ sketch of the set operation follows this list):

  • set (two operations in one command) - set a value, or remove it if null is used as the value.
  • add - adds an additional value to a multi-valued field.
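
As promised above, here is a minimal SolrJ sketch of the set operation (my own illustration, not from the article; it assumes a Solr 4.x core reachable at http://localhost:8983/solr with the schema shown earlier, and uses HttpSolrServer, the SolrJ 4.x client class):

import java.util.Collections;

import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class AtomicUpdateExample {
    public static void main(String[] args) throws Exception {
        HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");

        // Identify the document to update by its unique key ...
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", 12345);

        // ... and wrap the new value in a map keyed by the operation name.
        // "set" replaces the stored value; a null value would remove the field.
        doc.addField("title", Collections.singletonMap("set", "A new title"));

        server.add(doc);   // only "title" changes; other stored fields are preserved
        server.commit();
        server.shutdown();
    }
}

Because atomic updates rebuild the document from its stored fields behind the scenes, this only works once every field is stored="true" as described above.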
It is possible to boost certain documents at query time in Solr. Query time boosting can be a powerful resource for finding the most relevant and "best" content. Of course the more information you index, the more fields you will be able to use for your query time boosts. A useful application of query time boosting is giving a boost to the newest content.
 

Looking at an Example


Below is an example of a boost function that scores more recent content higher than older content. But first, for this example, we should define a schema.xml file:
<field name="id" type="number" indexed="true" stored="true" required="true" />
<field name="title" type="text_en" indexed="true" stored="true"/>
<field name="submit_date" type="date" indexed="true" stored="false" />
<field name="rating" type="number" indexed="true" stored="false" />


I won't go into too much detail about this schema, but it describes a document with an ID, a title, a submit date, and a rating, which will be relevant to other examples in this article.

So the first example we will go over is the one given in the Solr Relevancy Wiki:

{!boost b=recip(ms(NOW/HOUR,submit_date),3.16e-11,1,1)}


There are many things going on in this one boost query, so I'll break it down from the inside out:

  • NOW - the time in milliseconds since the Epoch (January 1, 1970, midnight UTC/GMT)
  • /HOUR - this operation rounds NOW down to the start of the current hour
  • submit_date - a field in the documents; in this case, the document's submit date.
  • ms(NOW/HOUR,submit_date) - ms is a function. As explained in the FunctionQuery wiki page, ms returns the difference in milliseconds between its arguments; in this case, the difference between NOW and the submit date of the document.
  • recip(
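
The excerpt cuts off at recip, but the FunctionQuery wiki page the article cites documents it: recip(x,m,a,b) computes a/(m*x + b). With m = 3.16e-11 (roughly 1 divided by the number of milliseconds in a year), the boost works out to about 1/(1 + age_in_years), so a year-old document scores about half of a brand-new one. As a usage sketch of my own (assuming a core at http://localhost:8983/solr and searching the title field), the boost query can be issued from SolrJ like this:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class RecencyBoostExample {
    public static void main(String[] args) throws Exception {
        HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");

        // The {!boost} parser multiplies each document's relevancy score by the
        // recip(...) function, so documents with newer submit_date values rank higher.
        SolrQuery query = new SolrQuery(
            "{!boost b=recip(ms(NOW/HOUR,submit_date),3.16e-11,1,1)}title:apache");

        QueryResponse response = server.query(query);
        System.out.println("Found " + response.getResults().getNumFound() + " documents");
    }
}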
If your site has a few sections that need to be secure when data is transmitted between the server and the local computer, such as an /order/ section for ordering or a /customer/ section containing customer data, it is of course recommended to secure those sections of your website with SSL so they are served over https, while your main site remains http.

This article will walk you through how to secure subsections of your website with HTTPS on an Apache web server using the mod_rewrite module.

The alternatives are to either (1) make your entire website always SSL-secured, which, unless the whole site really needs to be secured, is not recommended due to the higher server load (http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html#load), or (2) manually switch your links between http and https, which is also not recommended, since users can type a link in manually and you may accidentally miss a link. This is why automatically securing just the sections that are required to be secure is the best way to secure your site.

However, the issue arises as to how to ensure that every time someone visits those secure pages they are using SSL, while on the main site they are not. If you forget to change even a single link to those pages and leave it as http:// instead of https://, users would reach the page unsecured. Additionally, a user could always simply enter the URL into the browser's address bar without the https://. The solution, of course, is for the web server to …
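
The excerpt ends there, but as a preview, here is a minimal mod_rewrite sketch of the idea (my own illustration; it assumes an .htaccess file at the document root, mod_rewrite and SSL already enabled, and /order/ and /customer/ as the secure sections):

RewriteEngine On

# If a secure section is requested over plain HTTP, redirect to HTTPS.
RewriteCond %{HTTPS} off
RewriteRule ^(order|customer)/ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

# If any other page is requested over HTTPS, send it back to plain HTTP.
RewriteCond %{HTTPS} on
RewriteCond %{REQUEST_URI} !^/(order|customer)/
RewriteRule ^ http://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

With rules like these, no link on the site ever has to be edited by hand; the server enforces the right protocol for every request.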
If you've heard about htaccess and it sounds like it does what you want, but you're not sure how it works... well, you're in the right place. Read on.

Some Basics

#1. It's a file and its filename is .htaccess (yes, with a dot in the front).

#2. It's an Apache feature. Other web servers will not use .htaccess files (at least not without some custom plug-in).

#3. It's an extension of the Apache configuration. There is nothing you can do in the .htaccess file that you cannot also do in the main Apache configuration, but the most popular use of .htaccess is to set up redirection using the mod_rewrite module. If you don't know what mod_rewrite is but you're on a shared hosting provider, it's probably enabled.

#4. It applies ONLY to the location / folder that it's in (and any subfolders). This means you can configure how Apache behaves within a specific folder just by creating the .htaccess file inside that folder.

#5. It will ONLY work if the main Apache configuration has been set up so that "AllowOverride" is enabled for that folder (or if that folder is inside of a folder tree with it turned on). This is usually enabled on most shared hosting providers.

A Sample Redirection

Okay, on to the fun stuff. Let's say we have visitors that go to http://domain.com and http://www.domain.com but we want ALL of them to end up on the "www" version. We would create an .htaccess file and put it into the base directory for our website, and then put the …
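
The excerpt stops mid-sentence, but a minimal sketch of the kind of rule it is building toward would look like this (my illustration, assuming mod_rewrite is available; a fuller treatment appears in the non-www redirect article further down this page):

RewriteEngine On

# If the request arrived without the "www" prefix ...
RewriteCond %{HTTP_HOST} ^domain\.com [NC]

# ... permanently redirect it to the "www" version, keeping the path.
RewriteRule ^(.*)$ http://www.domain.com/$1 [L,R=301]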
Expert Comment by käµfm³d 👽:

@arober11

I didn't get a chance to read the articles in-depth, but I scanned over them and I don't believe they address what I am talking about. Can you point me to the section of either article which addresses what I mention below?

@gr8gonzo

Not how to do it (i.e. what code to use), but rather what it is. As I mentioned, my understanding of "SEO" URLs and .htaccess files is that the .htaccess file takes an SEO link that a user clicked on and effectively pre-processes the request before the web server gets a chance to inspect it. The rewrite engine takes the SEO link and turns it into the traditional querystring version. In my experience, many people do not realize this. They believe that the .htaccess file is going to modify the outgoing links in their HTML files to be more SEO-friendly. As I understand it, this is not the case. One should embed the SEO-friendly URLs into the HTML, and then craft the .htaccess file(s) in such a way as to un-SEO the URL when it comes back to the server--before the web server sees which resource was requested.

If I'm decidedly ignorant in this regard, please let me know  : )
Author Comment by gr8gonzo:

@kaufmed, you're correct in that htaccess files don't do that, although I haven't come across anyone else that seems to think that's what htaccess files do. Then again, that could just be my own selection bias. I think the comments so far should help clear that up, though.
Hi, in this article I'm going to teach you how to run your own site, and how to let people in without a static IP.

I'll talk about and explain each step... :)

By the way, everything in this tutorial is completely free and legal. This article is written for Windows, but the general approach works on Mac and Linux as well.

1. WAMP server
WampServer is basically a software bundle that runs a web server stack on Windows (WAMP stands for Windows, Apache, MySQL, PHP). It has everything you need to run a site, including the PHP engine, Apache and a MySQL database. It makes it fairly easy to get a site up and running, and it is simple to use.

MySQL is an open-source database that lets us store and work with data using SQL.
Apache is the web server software that organizes the requests to the server. If there is more than one user connected, requests form a queue, and Apache works through the queue one by one in the order the requests arrived.
The PHP engine runs our PHP code, interpreting the PHP source into instructions the machine can execute... (010010101) hh :)
So... now you know the components, let's get to work.

download links:
1. http://www.wampserver.com/en/download.php - download the version that you want...
2. http://www.no-ip.com/downloads.php - download the version to your operating system..
3. register at the no-ip site: http://www.no-ip.com/newUser.php - you will need to confirm the email, so take that into account before you try to use it.

OK, after that, setup the wamp server that you downloaded - when …
Expert Comment by gr8gonzo:

I've tried a lot of different WAMP packages (WampServer, XAMPP, Triad, etc...) but the best one so far has been EasyPHP. It is kept up to date frequently and has a simple, easy interface.

http://www.easyphp.org/
Over the last year I have answered a couple of basic URL rewriting questions several times, so I thought I might as well have a stab at explaining the basics, providing a few useful links and consolidating some of the most common queries into a single article.
So let us start at the very beginning, with defining the term URL (Uniform Resource Locator).

URLs, Requests and 404s

As the (link) explains, a URL is just a textual address for a Web-based resource. It consists of 2 to 4 parts and is either manually entered in a browser's address bar or picked up by following an HTML-based link, e.g. http://www.somsite.com:81/subdir/index.php?someVar=2&anotherVar=xxxx

Where the constituent parts are:
The scheme (Protocol) e.g. http://
The Hostname or IP and optionally a port e.g.  www.somsite.com:81
The Path (URI) to the desired resource of the site e.g.  /subdir/index.php
The Query String (CGI parameters) the resource will take e.g.  ?someVar=2&anotherVar=xxxx
Note:
1) The Path and Query strings are optional.
2) A browser-only anchor / bookmark may be found at the very end of the URL, e.g. #sectionTwo; this fragment is handled by the browser only and is not sent to the web server.

On receipt of a request a WEB server, such…
Expert Comment by gr8gonzo:

Nice job, arober11. I haven't used Apache for reverse-proxying before.
If you are running a LAMP infrastructure, this little code snippet is very helpful if you are serving lots of HTML, JavaScript and CSS-related information.

The mod_deflate module, which ships with Apache 2.2, provides the DEFLATE output filter that allows output from your server to be compressed before being sent to the client over the network. Using mod_deflate, you can compress HTML, Cascading Style Sheets (CSS), JavaScript, XML and other text-based files down to as little as 40% of their original size, thus reducing overall server network traffic. Compressing the data does result in a higher CPU load on your LAMP/Web server, but this overhead is to be expected, as the server takes on the compression work rather than shipping uncompressed data to the user's client (browser).

I designed a deflation configuration that can be dropped into the /etc/httpd/conf.d directory (commonly found on RHEL, Fedora and CentOS distros) so that it has global mod_deflate capabilities. Just drop the code snippet into that directory as 'deflate.conf' and bounce the Apache/httpd instance (i.e. /etc/init.d/httpd restart). However, if you are running a Debian or Ubuntu distro, then I recommend looking at this article first (http://www.control-escape.com/web/configuring-apache2-debian.html) for locating where to put the actual deflate.conf file.
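
The author's actual snippet is not reproduced in this excerpt, but a minimal deflate.conf along these lines might look like the following (my own sketch; extend the content-type list to suit your site):

# /etc/httpd/conf.d/deflate.conf -- compress text-based responses globally
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css
    AddOutputFilterByType DEFLATE text/javascript application/javascript
</IfModule>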

For debugging and performance tuning needs, I added a special filter format to the code that allows you to separate …

Introduction

As you're probably aware, the HTTP protocol offers basic / weak authentication which, in combination with the relevant configuration on your web server, provides the ability to password protect all or part of your host.  If you were not aware, and before you get too excited, note that the HTTP protocol offers little more than the ability to request and transmit a user-id and password, encoded but not encrypted (the base64 Authorization: Basic header), with every page request. So this approach shouldn't be regarded as a secure or efficient solution, but it may be useful as a deterrent to the uninitiated.

Anyway, I hope the following either helps you enable MySQL-based authentication on an Apache host, or points you elsewhere for your authentication solution:

Overview

Apache offers the ability, within its <DirectoryMatch>, <Directory>, <LocationMatch>, <Location>, <FilesMatch> or <Files> blocks, to restrict access to:
• A Valid User         [via "Require valid-user" or "AuthDBDUserPWQuery xxxxxx"]
• A Specific User      [via "Require user xxxx" or "AuthDBDUserPWQuery xxxxxx"]
• A Group of Users     [via "Require group yyyy" or "AuthDBDUserPWQuery xxxxxx"]

Credentials can be stored in local configuration (flat text), pseudo databases (a .dbm file) or a database of your choice {MySQL, Postgres, LDAP (including a Windows DC), NIS...} (see documentation here
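
The excerpt ends there. To make the overview concrete, here is a minimal MySQL-backed sketch (my own illustration; it assumes mod_dbd and mod_authn_dbd are loaded, an APR MySQL driver is installed, and a users table whose password column holds htpasswd-style hashes):

# Connection pool used for the authentication queries
DBDriver mysql
DBDParams "host=localhost dbname=auth user=apache pass=secret"

<Location /private>
    AuthType Basic
    AuthName "Restricted Area"
    AuthBasicProvider dbd
    Require valid-user
    # The query must return the (hashed) password as its first column
    AuthDBDUserPWQuery "SELECT password FROM users WHERE username = %s"
</Location>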

As Wikipedia explains, 'robots.txt' -- the robots exclusion standard, also known as the Robots Exclusion Protocol -- is a convention to prevent cooperating web spiders and other web robots from accessing all or part of a website which is otherwise publicly viewable. Robots are often used by search engines to categorize and archive web sites, or by webmasters to proofread source code.

The 'robots.txt' file can be broken down in many different ways (depending on standards and non-standard extensions); rather than try to explain them all here, I will direct your attention to the Wikipedia page instead -- http://en.wikipedia.org/wiki/Robots.txt -- as it gives better 'case-by-case' examples.
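
For a quick flavour of the format, a minimal robots.txt (my own illustration) that lets every robot crawl everything except one directory reads:

# Applies to all robots; keep them out of /cgi-bin/
User-agent: *
Disallow: /cgi-bin/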

Back to the issue at hand -- sometimes when websites are developed, the infamous 'robots.txt' file is missing from the site. As a result, search engine bots (i.e. GoogleBot, MSNBot) will detect that the robots.txt file is missing and will then scan and search through each and every directory on your web server, attempting to find underlying information to categorize, archive and post to the world.

In response to this conundrum, I have a Perl script on my Apache web server that will - in a way - mimic the presence of a robots.txt file for the site, thus restricting what the search bots can and cannot access (i.e. directories, files) and when a bot is allowed to visit the site and for how…
Author Comment by Michael Worsham:

This script was designed for handling several hundred virtual hosts so that only one file would have to be modified rather than individual robots.txt files for each site.
Expert Comment by Tony McCreath:

Using a common cgi folder. Nice idea.
In my two years working in SEO, and in the questions I have assisted with on here, I have often seen the need to redirect non-www URLs to their www versions.

For instance, redirecting
http://domain.com
to
http://www.domain.com

From a Search Engine Optimisation perspective I recommend this, as search engines may see these as 2 separate URLs serving the same content. In this case we need to perform a server-side 301 redirect. We use a 301 redirect because, if any links that are passing PR point to the non-www version, that value will be transferred over time to the www version of the URL. If you were to use a 302 redirect, you would not pass this important information over to the www version.

Creating this redirect is simple. On your server, locate your .htaccess file and open it in your text editor of choice.

For going from non-www to www you will need to use the below lines and change domain.com for your domain name and TLD.
 
RewriteEngine on
RewriteCond %{HTTP_HOST} ^domain\.com [NC]
RewriteRule ^(.*)$ http://www.domain.com/$1 [L,R=301]



This will then work on all pages within your site; for instance,
http://domain.com/somefolder/anotherone/
would redirect to
http://www.domain.com/somefolder/anotherone/

If, however, you wanted to remove the www from all of your URLs, you would use the lines below, again changing the domain and TLD.
 
RewriteEngine on
RewriteCond %{HTTP_HOST} ^www\.domain\.com [NC]
RewriteRule ^(.*)$ http://domain.com/$1 [L,R=301]



Shane Jones
Expert Comment by shrishti132:

Hi,

I was searching for some PAQs regarding non-www to www redirects for Apache. All of them talk about an .htaccess redirect, but my host does not allow mod_rewrite. Do you have any suggestions for this? If I upload an .htaccess file, my website gives an "internal server error".

Thanks.
Expert Comment by cduke250:

RewriteEngine on
RewriteCond %{HTTP_HOST} !^www\.domain\.com [NC]
RewriteRule ^(.*)$ http://www.domain.com/$1 [L,R=301]



It's a bit safer to use a negation instead, otherwise you won't catch ww.domain.com or w.domain.com.
