Apache Web Server





The Apache HTTP Server is a secure, efficient and extensible server that provides HTTP services in sync with the current HTTP standards. Typically Apache is run on a Unix-like operating system, but it is available for a wide variety of operating systems, including Linux, Novell NetWare, Mac OS-X and Windows. Released under the Apache License, Apache is open-source software.


I had this problem with Google Chrome before, but now I have it with Firefox as well:

I'm running Ubuntu 17.10 on my laptop, with apache2. This is a development machine, so I have numerous PHP sites defined as virtual hosts. This used to work perfectly in both Chrome and Firefox. But a couple of weeks (?) ago Chrome refused service, and now Firefox thinks it has to protect me from my own code. I don't know if the problem is caused by a recent update of Chrome or Firefox, or by an apache update.

Now I can't access any of these virtual hosts anymore. I get some crap message about "Your connection is not secure" and some stuff about HSTS.
The thing is: I don't use HTTPS for these sites, and I don't want to use it. All I'm developing are intranet applications NOT even accessible outside our company network, so I don't need HTTPS, and I couldn't even get certificates if I tried since there is no "official" domain name linked to these sites (they're all .lan or .dev names).

I wasted a full day on this crap and nothing seems to work. How do I disable HSTS completely on my locally installed apache2 on MY OWN laptop? These sites on my laptop are development versions not accessible outside my laptop, so I don't need this.
Disabling HSTS for any .dev website would also be a solution.

Or alternatively, does anyone know of a recent step-by-step "how to" on using self-signed certificates that does work? I've tried several today but none of them seem to work…
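For what it's worth: as far as I know the .dev TLD is on the HSTS preload list shipped with Chrome and Firefox (Google operates .dev as a real gTLD), which would explain the timing and why nothing server-side helps; renaming the vhosts to .lan or .test sidesteps it entirely. For the self-signed route, a minimal sketch (file paths and the mysite.lan name are illustrative):

# generate a self-signed certificate for the vhost name
sudo openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=mysite.lan" \
  -keyout /etc/ssl/private/mysite.lan.key \
  -out /etc/ssl/certs/mysite.lan.crt

# Apache vhost; enable with: a2enmod ssl && a2ensite mysite && systemctl reload apache2
<VirtualHost *:443>
    ServerName mysite.lan
    DocumentRoot /var/www/mysite
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/mysite.lan.crt
    SSLCertificateKeyFile /etc/ssl/private/mysite.lan.key
</VirtualHost>

The browser will still warn once about the unknown issuer, but for a non-preloaded name you can add a permanent exception or import the .crt as trusted, which HSTS-preloaded domains do not allow.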

See: https://blogs.msdn.microsoft.com/ieinternals/2009/07/20/internet-explorers-cache-control-extensions/

Generally, the pre-check directive is very similar to max-age. However, IE's implementation of max-age takes the Age response header into account. In contrast, the implementation of post-check and pre-check do not. Hence, pre-check is equivalent to max-age only when there is no Age header on the response.

This article is from 2009, so pretty old. How are max-age and the Age header related nowadays? Or are they not related at all anymore?
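A worked example (values are illustrative) of where the Age header bites:

Cache-Control: max-age=3600
Age: 600

Here the response has already spent 600 seconds in an upstream cache, so a recipient that honours Age (as RFC 7234's age calculation requires) treats it as fresh for only 3600 - 600 = 3000 more seconds; an implementation that ignores Age, as the article says pre-check did, would keep it for the full 3600.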
I'm trying to create a single .htaccess file that makes SEO-friendly URLs for the root directory AND subdirectories. For example:

testsite.com/index.cfm?p=about > testsite.com/about


testsite.com/subdirectory/index.cfm?p=widgets > testsite.com/subdirectory/widgets

I can do the first with the following code...

# skip requests for real files and directories
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
# send everything else to index.cfm in the document root
RewriteRule (.*) index.cfm?p=$1 [NC,L]


...but this will send testsite.com/subdirectory/index.cfm?p=widgets to testsite.com/widgets

Can you help me with redirecting subdirectories properly? Everything I've found on Google just shows you how to redirect specific subdirectories, not how to capture the subdirectory you're currently in. Thanks!
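A sketch of one way to do it, assuming each subdirectory contains its own index.cfm:

RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
# capture an optional directory prefix and the final segment separately,
# then rewrite to index.cfm inside that same directory
RewriteRule ^(.+/)?([^/]+)$ $1index.cfm?p=$2 [NC,L]

With that, testsite.com/about becomes index.cfm?p=about, and testsite.com/subdirectory/widgets becomes subdirectory/index.cfm?p=widgets.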
My question is about one specific reason to use max-age over Expires.

See for example: https://www.mnot.net/cache_docs/#EXPIRES
Although the Expires header is useful, it has some limitations. First, because there’s a date involved, the clocks on the Web server and the cache must be synchronised; if they have a different idea of the time, the intended results won’t be achieved, and caches might wrongly consider stale content as fresh.

But with max-age you also have the exact same problem, right? In my opinion there are 2 possibilities:

1. A cache receives a response from a server. The cache's clock starts counting from that moment. If there were a delay between the server sending the response and the cache receiving it, the age would be incorrect. So this is not a good way to calculate the age.

2. A cache receives a response from a server. The age of the response is calculated as a difference between the cache's current date and the Date general header included in the HTTP response.

Case 2 is in my opinion the right way to calculate the age of the response. But the response header field "Date" will be determined by the server, just like "Expires" is determined by the server. So in both cases the server's clock is compared with the cache's clock. So in this respect (clock synchronization), I see no difference between max-age and Expires?

With case 1 they would be right, because then the cache's clock on moment A …
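For what it's worth, RFC 7234 (section 4.2.3) combines both of your cases and takes the larger of the two. A sketch in Python (names mine, times as Unix seconds):

# Sketch of the current_age calculation from RFC 7234, section 4.2.3.
# date_value comes from the Date header (origin's clock); age_value from the
# Age header (0 if absent); request_time, response_time and now are read
# from the cache's own clock.
def current_age(age_value, date_value, request_time, response_time, now):
    apparent_age = max(0, response_time - date_value)  # your case 2: two clocks compared
    response_delay = response_time - request_time      # bounds the delay in your case 1
    corrected_age_value = age_value + response_delay   # uses the local clock only
    corrected_initial_age = max(apparent_age, corrected_age_value)
    resident_time = now - response_time
    return corrected_initial_age + resident_time

So your observation stands: the apparent_age term compares the origin's Date with the cache's clock, exactly as Expires does, and max-age only reduces, not removes, the dependence on synchronised clocks.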
When a stored response is used to satisfy a request without validation, why is my browser not showing me the HTTP response header field "Age"?

Just take a simple test.html file, which is cacheable by default. Now visit the page twice, so that the second time the file is shown directly from cache without validation.

Firefox shows me response headers like this:

Date: Mon, 12 Mar 2018 16:05:18 GMT
Server: Apache/2.4.17 (Unix) OpenSSL/1.0.1e-fips PHP/5.6.16
Last-Modified: Mon, 12 Mar 2018 12:24:12 GMT
Accept-Ranges: bytes
Content-Length: 143
Content-Type: text/html


But why does Firefox not show me the "Age" header field?

See: https://tools.ietf.org/html/rfc7234

When a stored response is used to satisfy a request without validation, a cache MUST generate an Age header field

And see: https://tools.ietf.org/html/rfc7234#section-5.1

However, lack of an Age header field does not imply the origin was contacted, since the response might have been received from an HTTP/1.0 cache that does not implement Age.

A browser's cache is not HTTP/1.0, so the response headers must contain an Age header field. Yet Firefox is not showing me "Age"?

Are browsers only showing the response and request headers from the server? But if that were the case, they would have to show no response headers at all, because there was no response from the server in the case of "200 OK (cached)"?

So I don't understand this? What's the logic behind this?

P.S. The example was about Firefox, but Chrome, for example, does the same.
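For comparison, a hit on a shared (proxy) cache usually does surface the header; an illustrative session (values made up, X-Cache is a common but non-standard proxy header):

$ curl -sI http://example.com/test.html
HTTP/1.1 200 OK
Date: Mon, 12 Mar 2018 16:05:18 GMT
Age: 42
X-Cache: HIT from proxy.example.com

The devtools panes appear to replay the stored response exactly as it arrived from the origin, and a response that came straight from the origin carries no Age; whether that satisfies the letter of RFC 7234 for the browser's own cache is a fair question.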
After using Apache and WebLogic for more than 10 years (the last working module used is mod_wl_22), I am ready to set up a replacement system with the newer version of the connector module (mod_wl_24) for our production.

I follow the official documentation from this link:


The server OS is:
[root@server90 ~]# uname -a
Linux server90 4.1.12-94.3.9.el7uek.x86_64 #2 SMP Fri Jul 14 20:09:40 PDT 2017 x86_64 x86_64 x86_64 GNU/Linux


Apache version:
[root@server90 ~]# apachectl -version
Server version: Apache/2.4.6 ()
Server built:   Oct 19 2017 14:54:33


APACHE_HOME folder details
[root@server90 httpd]# pwd
[root@server90 httpd]# ll
total 8
drwxr-xr-x 2 root root   58 Mar 10 21:58 conf
drwxr-xr-x 2 root root  103 Mar 10 21:56 conf.d
drwxr-xr-x 2 root root 4096 Mar 10 21:42 conf.modules.d
drwxr-xr-x 2 root root 4096 Mar 11 15:31 lib
lrwxrwxrwx 1 root root   19 Feb 22 16:32 logs -> ../../var/log/httpd
lrwxrwxrwx 1 root root   29 Feb 22 16:32 modules -> ../../usr/lib64/httpd/modules
lrwxrwxrwx 1 root root   10 Feb 22 16:32 run -> /run/httpd
[root@server90 httpd]# 


I created a lib folder in the APACHE_HOME folder and copied all the lib files and this connector module (downloaded from the Apache foundation website) into this folder:
[root@server90 httpd]# cd lib/
[root@server90 lib]# ll
total 138808
-rwxr-xr-x 1 root root  6990875 Mar 10 21:00 libclntshcore.so
-rwxr-xr-x 1 root root  6990875 Mar 10 21:00 libclntshcore.so.12.1
-rwxr-xr-x 1 root root 58793741 Mar 10 21:00 libclntsh.so
-rwxr-xr-x 1 root root 58793741 Mar 10 21:00 libclntsh.so.12.1
-rwxr-xr-x 1 root root   409107 Mar 10 21:00 libdms2.so
-rwxr-xr-x 1 root root  1768370 Mar 10 21:00 libipc1.so
-rwxr-xr-x 1 root root   544150 Mar 10 21:00 libmql1.so
-rwxr-xr-x 1 root root  6747034 Mar 10 21:00 libnnz12.so
-rwxr-xr-x 1 root root   346242 Mar 10 21:00 libons.so
-rwxr-xr-x 1 root root    98521 Mar 10 21:00 libonsssl.so
-rwxr-xr-x 1 root root    72281 Mar 10 21:00 libonssys.so
-rwxr-xr-x 1 root root   567319 Mar 11 15:24 mod_wl_24.so
[root@server90 lib]# 


After that, I added the directive for loading the module into the $APACHE_HOME/conf/httpd.conf file:
[root@server90 httpd]# cd conf
[root@server90 conf]# ll
total 36
-rw-r--r-- 1 root root 11814 Mar 11 00:49 httpd.conf
-rw-r--r-- 1 root root 13077 Oct 19 17:55 magic
-rw-r--r-- 1 root root  4104 Mar 10 21:58 weblogic.conf
[root@server90 conf]# cat httpd.conf 
LoadModule weblogic_module /etc/httpd/lib/mod_wl_24.so


Then I verified that this Apache web server includes dynamic shared object support (mod_so.c):
[root@server90 conf]# apachectl -l
Compiled in modules:
[root@server90 conf]# 


The next step is to test the syntax of httpd.conf:
[root@server90 conf]# apachectl -t
httpd: Syntax error on line 355 of /etc/httpd/conf/httpd.conf: Cannot load modules/mod_wl_24.so into server: libonssys.so: cannot open shared object file: No such file or directory
[root@server90 conf]# 


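"cannot open shared object file" is the runtime linker failing, not Apache: mod_wl_24.so depends on the Oracle client libraries you copied, but /etc/httpd/lib is not on the loader's search path. A sketch of the usual fix (directory taken from your listing, conf file name mine):

# register the library directory with the dynamic linker, then retest
echo "/etc/httpd/lib" > /etc/ld.so.conf.d/mod_wl.conf
ldconfig
apachectl -t

Setting LD_LIBRARY_PATH in /etc/sysconfig/httpd is the other common route on CentOS 7.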

Failed to load https://lh6.googleusercontent.com/1h7-sykGV-pR9VIwNDq-paHX_q6kKW25ZJXaOocQCV6uAUSASlM20l4Dnb53zbD8rwJdHVCjkBZS35uxLSunYRp--cQv5e08SrB1=w800-rw: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'https://drive.google.com' is therefore not allowed access. The response had HTTP status code 403. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.


I am getting the error message above in the Dev Tools console window in Chrome.

Website is running on Apache.

Can I fix this, or how can I troubleshoot this? I've looked at information on CORS, and it is difficult for me to digest or understand how to fix this. I am also brand new to the Apache web server.

Thank you!
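If the blocked resource were served by your own Apache, you could attach the header there; a minimal sketch (assuming mod_headers is enabled, and "*" is deliberately permissive):

<IfModule mod_headers.c>
    <FilesMatch "\.(png|jpe?g|gif)$">
        Header set Access-Control-Allow-Origin "*"
    </FilesMatch>
</IfModule>

In your console message, though, the 403 comes from lh6.googleusercontent.com, a server you don't control, so the practical fix is to host the image on your own server (or link to it rather than embed it) instead of changing your Apache config.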
I want the easiest way to link a desktop Access file to MySQL in the web cPanel.
Any ideas, please?
Hi all, I know this is all over every forum and I have tried and tried but just can't get it to work.
It is for a free image hosting service that allows hotlinking, but not abusive hotlinking, so they need to stop images being hotlinked from certain outside domains only; all other websites/forums etc. can hotlink, in the same way imgur blocks hotlinking from sites that break their terms of service.

The .htaccess file looks like this, but images are still being hotlinked on eBay; any ideas?

RewriteEngine on
RewriteCond %{HTTP_REFERER} ^https://(.+\.)?vipr.ebaydesc\.com/ [NC,OR]
RewriteCond %{HTTP_REFERER} ^https://(.+\.)?vi.vipr.ebaydesc\.com/ [NC,OR]
RewriteCond %{HTTP_REFERER} ^https://(.+\.)?ebay\.com/ [NC,OR]
RewriteCond %{HTTP_REFERER} ^https://(.+\.)?ebaydesc\.com/ [NC,OR]
RewriteCond %{HTTP_REFERER} ^https://(.+\.)?www.ebay\.com/ [NC]
RewriteRule .*\.(jpeg|jpg|gif|bmp|png)$ https://mydomain.com/nohotlinking.gif [L]

RewriteEngine on
RewriteRule \.(gif|jpe?g|png|bmp) 404.gif [NC,L]


The second rule is designed to show an image when the image at a particular URL has been deleted; that works perfectly.

We have also tried variations such as,

RewriteCond %{HTTP_REFERER} ^http(s)?://(.+\.)?vi.vipr.ebaydesc(.+)?\.com [NC]



RewriteCond %{HTTP_REFERER} ^https://(.*\.)*ebay\.com [NC,OR]


But nothing works, and we know it's possible because imgur does it.

Any ideas?
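One thing that jumps out: the unescaped dots (vipr.ebaydesc, www.ebay) are harmless, since an unescaped dot also matches a literal dot, but the blocking rule's target is itself an image on your own domain, and because it is a full URL mod_rewrite answers with an external redirect. That redirected request arrives carrying the same eBay referer, so it gets blocked too, and you loop. A consolidated sketch (the domain list is an assumption; eBay also uses country TLDs such as ebay.co.uk):

RewriteEngine On
RewriteCond %{HTTP_REFERER} ^https?://([^/]+\.)?(ebay\.[a-z.]+|ebaydesc\.com)/ [NC]
RewriteCond %{REQUEST_URI} !/nohotlinking\.gif$ [NC]
RewriteRule \.(jpe?g|gif|bmp|png)$ /nohotlinking.gif [NC,L]

Using a local path instead of a full URL serves nohotlinking.gif as an internal rewrite (HTTP 200), so the warning image actually shows up on the hotlinking page, and excluding the image itself prevents the loop.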

I've supported SSL certs and configured servers for years, so no newb here.

Had an interesting issue on CentOS 7.  
Let's call my SSL domain www.domain.com; we'll say the IP is set up in the virtual host directive (Apache 2.4.6 / OpenSSL 1.0.2k-fips).
And the server hosting it linux.domain.local, a CentOS 7.4.1708 box.

Cert is installed, and answering to www.domain.com, everything is good.

Here's where the issue rears its ugly head -
For some local software, ansible + jenkins, we have to make a hosts file entry back to the machine's IP itself, so in my /etc/hosts file, I placed:   www.domain.com

When I restart apache / openssl, I then get a domain mismatch warning in a browser when visiting the site. If I go to www.domain.com, it gives the mismatch, and when viewing the cert via the browser (view cert), it says the servername is actually linux.domain.local.

If I REM out the /etc/hosts file entry, and restart apache, SSL works as expected.

When the entry is in /etc/hosts, it appears to grab the rDNS name of the machine rather than serving up what I have specified in the <VirtualHost> directive.
To pre-answer: yes, the ServerName directive is www.domain.com, and the virtual host is set up specifically with the IP:port (

Never seen this on any other Linux / RPM-based flavor. One workaround seems to be setting the rDNS, but I don't want to rely on that for our production server(s).
I'd rather know WHY CentOS 7 …
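A sketch of what I would check first (IP and paths illustrative): httpd -S dumps the parsed vhost map and shows exactly which <VirtualHost> wins for each name and address, and the answering vhost determines the certificate served.

# which vhost answers for which name/address?
httpd -S

# the intended vhost, for comparison
<VirtualHost 10.0.0.5:443>
    ServerName www.domain.com
    SSLEngine on
    SSLCertificateFile /etc/pki/tls/certs/www.domain.com.crt
    SSLCertificateKeyFile /etc/pki/tls/private/www.domain.com.key
</VirtualHost>

If the hosts entry makes requests resolve to an address:port pair that only matches the default vhost in ssl.conf (which on CentOS carries a self-generated cert for the machine name, hence linux.domain.local), you would see exactly this mismatch.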

See: http://php.net/manual/en/function.session-cache-limiter.php

PHP is using:

Expires: Thu, 19 Nov 1981 08:52:00 GMT


I don't understand this 100%. A cache could have a different idea about time. Although it's really rare, a cache could think it's 1980. In a case like that, the cached copy will be seen as fresh.

When using:

Expires: 0


you can avoid problems like that. So in my opinion PHP is choosing the second best solution instead of the best solution.

See: https://tools.ietf.org/html/rfc7234#section-5.3

A cache recipient MUST interpret invalid date formats, especially the
value "0", as representing a time in the past (i.e., "already expired").

So when using the value "0", you know for sure it will be seen as a date in the past. But this is the protocol for HTTP/1.1 (not HTTP/1.0).

I was also searching for some information about HTTP/1.0 and invalid dates, but I could not find an answer. I know HTTP/1.0 CAN implement things from HTTP/1.1.

How do HTTP/1.0 caches deal with invalid dates? And can I be sure that in all situations "Expires: 0" will be seen as a date in the past? And if not, do you have examples?

I saw Google is using:

Expires: -1


In the past people were setting Expires via HTML meta tags ... in cases like that "-1" could mean something different than "0", but in what kind of situations does "Expires: -1" mean something different than "Expires: 0" in the HTTP headers?

So what to use? Date in the past, 0 or -1?
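For completeness, the defensive set I usually see (Cache-Control for HTTP/1.1 caches, Pragma and Expires for HTTP/1.0 ones), which makes the exact Expires value mostly moot:

Cache-Control: no-cache, no-store, must-revalidate
Pragma: no-cache
Expires: 0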
I have configured Apache on a Windows 7 machine to run our digital signage media on multiple screens, which I got working fine, but I have run into another problem: I can't get to our own website now. Every time we type our web address www.example.com it takes us to the Apache server page. Can you guys please tell me how to resolve this problem?
1. Where or what is the default Apache (2.x?) config file for Windows, as I see more than one? I see three httpd*.conf files (each with a date in the filename, possibly backups) and one called httpd.conf (most recent time stamp). They were in the folder
C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf

2. I'm looking for the access.log file(s) showing inbound requests, which I believe I found with all the other files such as apache_reverse*.log, SSLaccess.log.* and so forth, but the access.log files are 0 bytes. It might be a permissions issue, but is it called access*.log? All the files mentioned were found in C:\Program Files (x86)\Apache Software Foundation\Apache2.2\cache-proxy\logs.

3. If you are using an ELB (AWS), wouldn't it mask/substitute the client's IP with its own? I say this because the SSLaccess.log files "seem" to be showing the IPs of the ELB, and I wasn't sure whether the access.log, if it contained any data, would be any different. Both log settings use the parameter %h to capture the client's IP.

httpd.conf file contents:

CustomLog "logs/access.log" combined
LogLevel debug

ErrorLog "C:\Program Files (x86)\Apache Software Foundation\apache2.2\cache-proxy/logs/apache_error.log"

CustomLog "C:\Program Files (x86)\Apache Software Foundation\apache2.2\cache-proxy/logs/apache_reverse_%m%d%y.log" custom

Thank you
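On point 3: yes, the ELB opens its own connection to the backend, so %h records the ELB's address; the original client address is passed in the X-Forwarded-For request header. A sketch for httpd.conf (the format name elb_combined is mine):

LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b" elb_combined
CustomLog "logs/access.log" elb_combined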
I'm trying to understand:

Vary: Accept-Encoding


Let's say we have:

- client 1 (only understands gzip-encoding)
- client 2 (only understands deflate-encoding)
- a shared cache
- a server (supports gzip and deflate encoding / compression, so the server can send the response message body encoded / compressed)
- a resource (1 url, cacheable)

If client 1 makes a request to the resource first, the response will be stored in the cache. The resource is gzip-encoded. If client 2 now makes a request, the cache will serve the gzip-encoded version, which client 2 does not understand.

This is what I understand about it from the internet. But this sounds weird to me.

1. The stored response in the cache must contain "Content-Encoding: gzip", because when a server sends an encoded response, it lets you know which encoding has been used. So suppose I were a cache and I got a request with "Accept-Encoding: deflate" (or with an empty value). As a cache I know that my stored response is gzip-encoded (because of the stored "Content-Encoding: gzip"). Then I wouldn't need "Vary: Accept-Encoding" to know that I have to make a new request to the server??

So why "Vary: Accept-Encoding" exists anyway and in what kind of situations it really makes a difference?

2. Are there also caches around which can decode / encode (gzip / deflate)? In cases like that there would also be no need to add "Vary: Accept-Encoding", because a cache could decode …
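An illustrative exchange (headers only) of what Vary adds: it makes the listed request header part of the cache key, so a shared cache stores one variant per encoding and never has to reason about Content-Encoding itself:

Request 1:   Accept-Encoding: gzip
Response 1:  Content-Encoding: gzip
             Vary: Accept-Encoding

Request 2:   Accept-Encoding: deflate   (different cache key, so the cache forwards to the origin)
Response 2:  Content-Encoding: deflate
             Vary: Accept-Encoding

Your Content-Encoding reasoning would work for this one header, but Vary generalises to request headers a cache cannot interpret (Accept-Language, Cookie, ...), and a cache is never required to understand gzip at all, only to compare header values.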
If you have, for example, an image with max-age=31536000, when using HTTPS what is the best thing to do:

Cache-Control: public, max-age=31536000


Cache-Control: private, max-age=31536000


Cache-Control: max-age=31536000


Which one and why?

I also did some own research, but I'm not sure yet what the answer has to be. I think this is true:

By default web browsers should cache content over HTTPS the same as over HTTP, unless explicitly told otherwise via the HTTP Headers received.

This is about the cache of the browser. For shared caches I think this is true:

If the request is authenticated or secure (i.e., HTTPS), it won’t be cached by shared caches.

Google is saying here, see: https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/http-caching

If the response is marked as "public", then it can be cached, even if it has HTTP authentication associated with it, and even when the response status code isn't normally cacheable. Most of the time, "public" isn't necessary, because explicit caching information (like "max-age") indicates that the response is cacheable anyway.

That's what Google is saying, but I also checked what they are doing. See:

Example: https://www.google.nl/images/branding/googlelogo/2x/googlelogo_color_120x44dp.png
cache-control:private, max-age=31536000


Example: https://www.google.com/textinputassistant/tia.png
cache-control:public, max-age=31536000


Response headers can contain something like:

Cache-Control: must-revalidate


But "must-revalidate" does not exist for the request headers, see:


Why? Is there a reason behind this?

Take for example me, my browser, my browser's cache and the origin server. Let's say there is a stale cached copy in the browser's cache. Imagine I don't want the cached copy to be served without a request being made to the server, not even if the cache is disconnected from the origin server. I would add must-revalidate to the request headers, but it only exists for the response headers.

Why is that and what's behind it? Directives like max-age, no-cache and no-store exist as both response AND request directives, so why is must-revalidate an exception?
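For what it's worth, the closest request-side equivalent I know of is max-age=0 (a sketch of such a request):

GET /resource HTTP/1.1
Host: example.com
Cache-Control: max-age=0

Any stored response older than 0 seconds is then unacceptable, so the cache must revalidate before reusing its copy; combined with the absence of max-stale, that covers most of what a request-side must-revalidate would say.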
I have nginx running as a reverse proxy successfully. I am looking to simplify the listen ports.

Is there a way to create 2 or more listen ports for a given entry?

for example

server {
    listen 80, 443;
    server_name example.domain.com;
    location / {
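For what it's worth, nginx wants one listen directive per port rather than a comma list; a sketch (the proxy target is illustrative):

server {
    listen 80;
    listen 443 ssl;    # also needs ssl_certificate / ssl_certificate_key
    server_name example.domain.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}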
Let's first take a look at the definitions.

1. Max-age in request headers:
See: https://tools.ietf.org/html/rfc7234.html#section-5.2.1

The "max-age" request directive indicates that the client is
unwilling to accept a response whose age is greater than the
specified number of seconds.  Unless the max-stale request directive
is also present, the client is not willing to accept a stale

2. Max-age in the response headers:
See: https://tools.ietf.org/html/rfc7234.html#page-26

The "max-age" response directive indicates that the response is to be
considered stale after its age is greater than the specified number
of seconds.

And see: https://tools.ietf.org/html/rfc7234#section-4.2.4

A cache MUST NOT send stale responses unless it is disconnected
(i.e., it cannot contact the origin server or otherwise find a
forward path)

So is it true that "max-age=0" in the response headers is NOT equivalent to "no-cache" in the response headers (because of the disconnected case), BUT "max-age=0" in the request headers IS equivalent to "no-cache" in the response headers?
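An illustration of the asymmetry as I read those definitions (values only):

Cache-Control: max-age=0      (response: stale at once, yet section 4.2.4 lets a
                               disconnected cache serve it stale)
Cache-Control: no-cache       (response: may never be reused without successful
                               validation, disconnected or not)

Request-side, max-age=0 and no-cache both force validation while the cache is connected, which is why they are often described as equivalent there.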

3. No-cache in the request headers:
See: https://tools.ietf.org/html/rfc7234.html#page-23

The "no-cache" request directive indicates that a cache MUST NOT use
a stored response to satisfy the request without successful
validation on the origin server.

4. No-cache in the response headers:
See: https://tools.ietf.org/html/rfc7234.html#section-5.2.2</a>
I have uploaded a backup of a WordPress site, including the DB, but now the site will not load.

I get an HTTP error 500.

In the logfile I get:
AH01071: Got error 'PHP message: PHP Fatal error: Class 'Requests_Hooks' not found in /var/www/vhosts/domain.tld/httpdocs/wp-includes/class-wp-http-requests-hooks.php on line 17\n', referer: http://domain.tld/?page_id=1776.

Any idea?
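Requests_Hooks is part of the Requests library that WordPress bundles under wp-includes/Requests, so this usually means wp-includes was uploaded incompletely or comes from a different WordPress version than the rest of the files. A sketch of a repair, assuming WP-CLI is available (otherwise re-upload wp-admin and wp-includes from the matching release by hand):

# re-download core files without touching wp-content
wp core download --force --skip-content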


I have a web application that works on WAMP on a Windows server, and it works fine. I moved the application folder to Linux Ubuntu 16.04 with Apache.
I checked the permissions and the Apache config.
When I try to open the application it gives me a 404 page not found.
In the CI log the error is "page not found: index".
So where is the problem?
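The usual suspects when moving from WAMP to Ubuntu are case-sensitive file names (Linux distinguishes Controller.php from controller.php, Windows doesn't) and .htaccess rewrites being ignored. A sketch of the Apache side (paths illustrative):

sudo a2enmod rewrite

# in the site's vhost, e.g. /etc/apache2/sites-available/000-default.conf:
<Directory /var/www/html>
    AllowOverride All
    Require all granted
</Directory>

sudo systemctl restart apache2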
I have a website/server where I would like the following to happen.

  • If a person types the URL: http://mydomain.com/apple it should go to: http://mydomain.com/pickfruit.php?a=apple
  • And if they enter: http://mydomain.com/pear it goes to: pickfruit.php?a=pear
  • I also have the need for the ability of legit sub-directories to exist. For example, http://mydomain.com/admin would take you into that sub-directory and would NOT redirect to pickfruit.php?a=admin.

So what I am looking for is a solution where I do not have to create all of the individual fruit directories and then put a header redirect in each directory.  

I assume this can be done with the .htaccess file or something like that? Any insight would be appreciated. I can use .php if that helps as well.
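Yes, this is a job for mod_rewrite in .htaccess; a sketch, assuming pickfruit.php sits in the document root. The two conditions exclude real files and directories, so /admin keeps working untouched:

RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^([^/]+)/?$ pickfruit.php?a=$1 [QSA,L]

So /apple becomes pickfruit.php?a=apple, /pear becomes pickfruit.php?a=pear, and /admin (an existing directory) is left alone.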
Running the following CURL command:
curl https://tlstest.paypal.com


I am faced with an error to do with the SSL certificate:

curl: (60) SSL certificate problem, verify that the CA cert is OK. Details:
error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
More details here: http://curl.haxx.se/docs/sslcerts.html

curl performs SSL certificate verification by default, using a "bundle"
 of Certificate Authority (CA) public keys (CA certs). The default
 bundle is named curl-ca-bundle.crt; you can specify an alternate file
 using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
 the bundle, the certificate verification probably failed due to a
 problem with the certificate (it might be expired, or the name might
 not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
 the -k (or --insecure) option.
[#### ~]$ curl --tlsv1.2 https://tlstest.paypal.com/
curl: option --tlsv1.2: is unknown
curl: try 'curl --help' for more information
[#### ~]$ curl --tlsv1 https://tlstest.paypal.com/
curl: (60) SSL certificate problem, verify that the CA cert is OK. Details:
error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
More details here: http://curl.haxx.se/docs/sslcerts.html

curl performs SSL certificate verification by default, using a "bundle"
 of Certificate Authority (CA)
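Both symptoms point the same way: a curl/OpenSSL stack too old for TLS 1.2 (which tlstest.paypal.com was set up to require) and/or an outdated CA bundle. A sketch of what I'd try on an RHEL-family box (package names assume yum):

# update the TLS stack and the CA bundle
yum update curl openssl nss ca-certificates

# retest, pointing curl at the system bundle explicitly if needed
curl --cacert /etc/pki/tls/certs/ca-bundle.crt https://tlstest.paypal.com/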
I'm getting the following error on my Bluehost server when I use a certain script.

Not Acceptable! An appropriate representation of the requested resource could not be found on this server. This error was generated by Mod_Security.

I think it has something to do with the .htaccess file; if so, how can I edit it to remove this issue?

Here is my htaccess file from the public_html folder (some URLs replaced with 'mydomain'):

RewriteEngine on
# Use PHP5.6 as default
# AddHandler application/x-httpd-php56 .php
RewriteCond %{HTTP_HOST} ^mydomain\.net$ [OR]
RewriteCond %{HTTP_HOST} ^www\.mydomain\.net$
RewriteCond %{REQUEST_URI} !^/[0-9]+\..+\.cpaneldcv$
RewriteCond %{REQUEST_URI} !^/\.well-known/pki-validation/[A-F0-9]{32}\.txt(?:\ Comodo\ DCV)?$
RewriteRule ^/?$ "http\:\/\/www\.mydomain\.com\/" [R=302,L]

# php -- BEGIN cPanel-generated handler, do not edit
# Set the “ea-php56” package as the default “PHP” programming language.
<IfModule mime_module>
  AddType application/x-httpd-ea-php56 .php .php5 .phtml
# php -- END cPanel-generated handler, do not edit


Thanks a lot
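You can sometimes relax ModSecurity per-site from .htaccess, but only when the host's build permits these directives there (the 1.x directives were historically allowed on shared hosts); a sketch:

<IfModule mod_security.c>
    SecFilterEngine Off
    SecFilterScanPOST Off
</IfModule>

With ModSecurity 2.x the equivalent (SecRuleEngine Off) normally has to go in the server config, not .htaccess, so the cleaner route is to ask Bluehost which rule ID the request trips and have them whitelist it.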
Decryptor.class in the Apache POI application contains a password in clear text. Why is this? Isn't this a security issue?
My vendor is telling me that that's the architecture by Apache. Is the vendor correct?
I need to redirect most links of my website, including the home page, to the same domain with a different subdomain prefix. But I need to NOT redirect all links that include a specific directory following the domain in the URL. Here's what it looks like:

All links from "admin.domain.com" must redirect to "www.domain.com" EXCEPT for all links that begin with "admin.domain.com/administrator".

What is the htaccess code that will do this, assuming it can be done?
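Assuming mod_rewrite and that /administrator is the one exempt directory, a sketch for the admin host's .htaccess:

RewriteEngine On
RewriteCond %{HTTP_HOST} ^admin\.domain\.com$ [NC]
RewriteCond %{REQUEST_URI} !^/administrator(/|$)
RewriteRule ^(.*)$ https://www.domain.com/$1 [R=301,L]

The first condition limits the redirect to the admin hostname, the second exempts the /administrator tree, and R=301 makes it permanent (use R=302 while testing).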


