
Apache Web Server

19K

Solutions

14K

Contributors

The Apache HTTP Server is a secure, efficient and extensible server that provides HTTP services in sync with the current HTTP standards. Typically Apache is run on a Unix-like operating system, but it is available for a wide variety of operating systems, including Linux, Novell NetWare, Mac OS X and Windows. Released under the Apache License, Apache is open-source software.


Hi,

We have an Apache web server running Perl + Template Toolkit and a MySQL database for user access, on a VM running Windows 2012 R2.

We have recently made some tweaks to the Perl code, nothing big, just a few edits, some extra templates and some design changes (CSS etc.). This was tested for several months on an identical dev server and we didn't see any issues.

We went live with the changes and everything was fine for about a week, but now we are seeing an odd issue where the site hangs when navigating between pages, then works fine for a bit and then hangs again. There is no pattern to this at all; sometimes it's fine for a few minutes, other times it's only seconds before it hangs again.

CPU usage is about 15% and RAM is at 80%, but it always has been as we only have 2 GB on here. Neither of them peaks or changes when the browser hangs.

We only see this issue when we browse the website using the local IP address.

If we browse the site using localhost in the same browser (in a different tab), we never see it.

So we can be sat with the site hanging in the IP tab while still able to browse around in the other tab on localhost.

Our DB is tiny (only about 200 users), the site isn't hugely busy, and nothing else has changed.

Before we go rolling the system back, should we be looking anywhere in particular that could cause this disparity between browsing via the local IP and via localhost?

We've:

Reset the IP stack, reinstalled the VM tools drivers and re-set up the …

I am trying to install a MediaWiki farm of four wikis sharing the same resources, much like Wikisource.org, on Amazon Web Services EC2. The installation will consist of the main wiki in English (wikiexample.org), two language subdomains (lang1.wikiexample.org, lang2.wikiexample.org) and a commons wiki hosting their media files (commons.wikiexample.org).

The wikis will have the Wikisource extensions such as ProofreadPage, the PDF handler, the DjVu extension and the Translate extension.

It should be possible to maintain and upgrade all the wikis centrally rather than one at a time.

Can anyone please walk me through how to go about this?

Thank you.
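For what it's worth, the usual "one codebase, many wikis" pattern is a shared LocalSettings.php that switches on the requested host, so every wiki is upgraded in one place. A minimal sketch only, with hypothetical database names and settings-file layout (not a tested Wikisource farm):

<?php
// Hypothetical sketch: pick the wiki from the requested host so all four
// wikis share one MediaWiki codebase and one upgrade path.
$wikis = [
    'wikiexample.org'         => 'wiki_en',
    'lang1.wikiexample.org'   => 'wiki_lang1',
    'lang2.wikiexample.org'   => 'wiki_lang2',
    'commons.wikiexample.org' => 'wiki_commons',
];

$host = isset( $_SERVER['SERVER_NAME'] ) ? $_SERVER['SERVER_NAME'] : 'wikiexample.org';
if ( !isset( $wikis[$host] ) ) {
    die( 'Unknown wiki: ' . htmlspecialchars( $host ) );
}

$wgDBname = $wikis[$host];                                  // per-wiki database
require_once __DIR__ . '/settings/' . $wgDBname . '.php';  // per-wiki overrides

The same switch can drive the per-wiki upload paths, and the shared commons is then configured once in the common file (for example via $wgForeignFileRepos).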
After a security review of our new WordPress site it was pointed out that we're vulnerable to "External Service Redirection - DNS". Specifically, if a URL is entered into the "Your Name" field of our Contact Form 7 form, the testers found that "it was possible to induce the application to perform server-side DNS lookups of arbitrary domain names".

The suggested remedial action is to implement a whitelist of permitted services and hosts and to block any interaction not on this whitelist.

I'm something of a newbie when it comes to this, and it occurred to me (perhaps wrongly!) that there may be different whitelists: one for hosts that are not allowed to reach the site, and a separate one for the sites our server is allowed to talk to. Or does a whitelist imply both directions?

Anyway, all help on this gratefully received; I'm imagining this is something that's been done a zillion times before!

I'm using IIS and would prefer an IIS answer, although Apache-related help is just as good, because I've realised I can more or less 'translate' the approach once I've got the idea.

Incidentally, we definitely want to avoid editing the Contact Form 7 code too much because the changes may be lost when we upgrade, even though I dare say this would fix the issue. Unfortunately the latest version has the same problem, although I will let the Contact Form 7 team know to look into this. Ideally I would use another form for this sort of data collection, although I'm part of a team that prevents this!

And so, in …
After I change the port number to 591 in httpd.conf under the Apache folder in XAMPP, where should I make the corresponding change for the phpMyAdmin page? So that when I click the Admin button inside the XAMPP control panel, it takes me there directly.
After I ran the commands below, my ownCloud ended up like this (see the attached screenshot). Any idea how to fix it?

semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/owncloud/data'
restorecon '/var/www/html/owncloud/data'
semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/owncloud/config'
restorecon '/var/www/html/owncloud/config'
semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/owncloud/apps'
restorecon '/var/www/html/owncloud/apps'

setsebool -P httpd_unified 1
setsebool -P httpd_execmem 1


BACKGROUND:
A while back, I set up nameservers on my VPS (let's call them 'ns1.mydomain.com' and 'ns2.mydomain.com'). I host a couple of dozen websites on that VPS.

For all of my domains, on the domain registrar's site, I set the nameservers for the domain to the custom nameservers 'ns1.mydomain.com' and 'ns2.mydomain.com'.

Recently, I had to ask my VPS provider to create a new server for me (let's call it 'newVPS'), leaving my previous VPS (let's call it 'oldVPS') active so I could migrate or re-create accounts and content from the oldVPS to the newVPS.

Both the oldVPS and the newVPS use the WHM/cPanel admin interface.
The oldVPS is set up as (cut and pasted from the WHM panel banner): 'CENTOS 6.9 i686 virtuozzo – oldvps  WHM 56.0 (build 52)'
The newVPS is set up as (cut and pasted from the WHM panel banner): 'CENTOS 7.4 virtuozzo [newvps]  v68.0.21'

My understanding (which is limited in these areas) is that the nameservers I set up on my VPS have to be associated with one of the domains I own/host on that VPS.

The nameservers I had previously set up on oldVPS were associated with 'mydomain.com', one of the domains/accounts hosted on oldVPS.

For simplicity, I'm thinking of creating new nameservers on newVPS and associating them with 'myotherdomain.com', another domain/account to be hosted on newVPS.

QUESTION:
How do I create my new nameservers on newVPS, say 'ns1.myotherdomain.com' and 'ns2.myotherdomain.com', presumably from newVPS's WHM (I'm …
I am trying to help a friend, and after editing a few files the site is getting a 500 error.
Attached are a few files from the web site.
If I type domain/index-static.html it loads, but no links are working.

the domain is http://datlasestates.com/

I am not sure what needs to be done; any guidance would be appreciated.

I know the site is hosted on an Amazon server. Could this be from their side, or could it be a scripting error?


Here is the .htaccess file:

GeoIPEnable on
SetEnvIf GEOIP_COUNTRY_CODE US AllowCountry
SetEnvIf GEOIP_COUNTRY_CODE CA AllowCountry
SetEnvIf GEOIP_COUNTRY_CODE RS AllowCountry
# SetEnvIf GEOIP_COUNTRY_CODE UA AllowCountry
# SetEnvIf GEOIP_COUNTRY_CODE RU AllowCountry

Deny from all
Allow from env=AllowCountry


RewriteEngine on

#Redirect 301 /index.html /index.php
Redirect 301 /about.html /about.php
Redirect 301 /faq.html /faq.php
Redirect 301 /recently_purchased.html /recently-purchased.php
Redirect 301 /contact.html /contact.php
Redirect 301 /sell.html /sell.php
Redirect 301 /blog.datlas.html /blog.datlas.php
Redirect 301 /philadelphia.html /philadelphia.php
Redirect 301 /pennsylvania.html /pennsylvania.php
Redirect 301 /victorian_jewelry.html /victorian-jewelry.php
Redirect 301 /art-nouveau-beauty-of-nature.html /art-nouveau-beauty-of-nature.php
Redirect 301 /edwardian-style-hail-to-the-king.html /edwardian-style-hail-to-the-king.php
Redirect 301 /faberge-a-life-of-its-own.html /faberge-a-life-of-its-own.php
Attached files: index-static.html, index.php
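One thing worth checking (a guess, not a confirmed diagnosis): GeoIPEnable is only valid when mod_geoip is loaded, and an unknown directive in .htaccess produces exactly this kind of 500 error on a host without the module. A hedged sketch that keeps the GeoIP block from breaking the site either way:

<IfModule mod_geoip.c>
    GeoIPEnable on
    SetEnvIf GEOIP_COUNTRY_CODE US AllowCountry
    SetEnvIf GEOIP_COUNTRY_CODE CA AllowCountry
    SetEnvIf GEOIP_COUNTRY_CODE RS AllowCountry
    Deny from all
    Allow from env=AllowCountry
</IfModule>

The Apache error log (not just the browser) will name the offending directive, which is the quickest way to confirm or rule this out.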
I am having difficulty locating the Apache Web Server configuration module, or which file to configure, to set the limit of allowed simultaneous session requests. The Apache Web Server is installed on Solaris 10.

Any assistance would be greatly appreciated.
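As a rough pointer (a sketch of the stock prefork defaults, not this Solaris box's actual file): the ceiling on simultaneous connections is the MaxClients directive (renamed MaxRequestWorkers in Apache 2.4), normally set in httpd.conf inside the prefork MPM section:

<IfModule prefork.c>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    ServerLimit         256
    MaxClients          256
    MaxRequestsPerChild 4000
</IfModule>

Raising MaxClients above 256 also requires raising ServerLimit, and each extra child costs RAM, so the value should fit the machine's memory.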
Hi All,

I was wondering if anyone can point me in the right direction. I am currently trying to set up monitoring for our internal Solr system using the Nagios monitoring system with a plugin called Opsview. I would like to test this in a virtual environment such as Ubuntu before I roll it out on the live Solr server; essentially I am trying to get alerts for the Solr system.

I am currently using this as a guideline.

https://leanjavaengineering.wordpress.com/2011/12/07/monitoring-apache-solr/

If anyone has good information that can help, please let me know.

Thanks.
There is an issue with Apache Struts, and I have a server running Apache (Trend Micro AV) and need to find out what version of Struts it is using. The vulnerable versions are Struts 2.5 through Struts 2.5.14. I tried searching all drives on the server for struts*.jar and found nothing. Is there a way to find out what version is running on the server? It is not internet accessible.
For example, consider these response headers:

RESPONSE HEADERS (287 B)

HTTP/1.1 200 OK
Date: Fri, 01 Dec 2017 10:50:12 GMT
Server: Apache/2.4.17 (Unix) OpenSSL/1.0.1e-fips PHP/5.6.16
Last-Modified: Fri, 01 Dec 2017 10:36:49 GMT
Accept-Ranges: bytes
Content-Length: 143
Keep-Alive: timeout=1, max=100
Connection: Keep-Alive
Content-Type: text/html



Firefox and Chrome say that the size of the response headers above is 287 B.

You can check the number of bytes via, for example: https://lingojam.com/ByteCounter

HTTP/1.1 200 OK							15
Date: Fri, 01 Dec 2017 10:50:12 GMT				35
Server: Apache/2.4.17 (Unix) OpenSSL/1.0.1e-fips PHP/5.6.16	59
Last-Modified: Fri, 01 Dec 2017 10:36:49 GMT			44		
Accept-Ranges: bytes						20
Content-Length: 143						19
Keep-Alive: timeout=1, max=100					30
Connection: Keep-Alive						22
Content-Type: text/html						23
------------------------------------------------------------------------- +
15+35+59+44+20+19+30+22+23=					267 Bytes



With other tests I already found out that every line is followed by 2 extra bytes (a line break). And with other tests I found out that the status line is also part of this calculation.

There are 9 lines above, but I'm not sure if there is a line break after the last line. So we have 2 options:
9 * 2 = 18 Bytes
8 * 2 = 16 Bytes

So:
267 + 16 = 283 Bytes
or
267 + 18 = 285 Bytes

But the browser says 287 Bytes. So there is a difference of 2 or 4 Bytes between my calculation and the calculation of the browser.

The questions are:
- Is there also a line break after the last line?
- Where exactly are the extra bytes coming from? Is there an empty line somewhere, and if so, where exactly?

In other words: how do you calculate the size of the response headers?
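For what it's worth, the numbers add up if you assume every line, the status line included, is terminated by CRLF (2 bytes) and the header block is closed by one extra empty CRLF line. A small sketch that reproduces the 287:

<?php
// Recompute the 287 bytes the browser reports, assuming each header line
// (status line included) ends with "\r\n" and the block ends with an empty
// "\r\n" line.
$lines = [
    'HTTP/1.1 200 OK',
    'Date: Fri, 01 Dec 2017 10:50:12 GMT',
    'Server: Apache/2.4.17 (Unix) OpenSSL/1.0.1e-fips PHP/5.6.16',
    'Last-Modified: Fri, 01 Dec 2017 10:36:49 GMT',
    'Accept-Ranges: bytes',
    'Content-Length: 143',
    'Keep-Alive: timeout=1, max=100',
    'Connection: Keep-Alive',
    'Content-Type: text/html',
];

$raw = 0;
foreach ($lines as $line) {
    $raw += strlen($line);                 // visible characters only: 267
}
$total = $raw + 2 * count($lines) + 2;     // + 9 CRLFs + final empty line
echo "$raw + " . (2 * count($lines)) . " + 2 = $total\n";   // 267 + 18 + 2 = 287

Under that assumption the answer to both questions is: yes, the last header line also ends with a line break, and the remaining 2 bytes are the empty line that terminates the header block.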
I checked the default behavior of my Apache server:

- In the case of a PHP file without sessions in it (no session_start), the server responds without Cache-Control and Last-Modified in the response headers.
- In the case of a PHP file with sessions, it responds with, among others:

Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache



I understand the second case and why Apache is doing this.

I don't understand the first case 100%. Apache does not include the Last-Modified header because apparently, by default, it assumes a PHP file is dynamic, although a PHP file is not necessarily dynamic. For example:

<?php
    echo 'test';
?>



This is pretty logical of Apache, because you could just use an html file in a case like that. So we have to see the file as something like:

<?php
    echo microtime();
?>



Now the file is different on every request, so the Last-Modified header would not make sense, and that's why Apache does not serve (by default) the Last-Modified header for PHP files.

Up to this point I understand everything and it's clear.

Now check this: https://tools.ietf.org/html/rfc7234#page-5

This is what a cache should do by default:

Although caching is an entirely OPTIONAL feature of HTTP, it can be assumed that reusing a cached response is desirable and that such reuse is the default behavior when no requirement or local configuration prevents it. Therefore, HTTP cache requirements are focused on preventing a cache from either storing a non-reusable response or reusing a stored response inappropriately, rather than mandating that caches always store and reuse particular responses.
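As a side note, a PHP script that is effectively static can opt back in to caching by emitting the validators itself, since Apache/PHP will not add them to dynamic responses by default. A small sketch (the one-hour max-age is just an illustrative value):

<?php
// Send the caching headers PHP omits by default, so caches may store and
// revalidate the response; adjust the lifetime to taste.
$lastModified = filemtime(__FILE__);
header('Cache-Control: public, max-age=3600');
header('Last-Modified: ' . gmdate('D, d M Y H:i:s', $lastModified) . ' GMT');
echo 'test';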
I have a URL like http://www.myweb.com/restAPI/index.php which holds the routes for my REST API and works fine.
So, for example, if I call http://www.myweb.com/restAPI/index.php/player/read it works fine, but I would like to call it with
just http://www.myweb.com/restAPI/player/read, which currently fails with the following error: "The requested URL /restAPI/player/read was not found on this server."
My rewrite currently looks like this:
Options -MultiViews
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^/?([a-z]+)$ $1.php [L]
RewriteRule ^ index.php [QSA,L]

So what I would like is that any request to anything under http://www.myweb.com/restAPI/ is routed to index.php so it can be dispatched correctly.
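For reference, the usual front-controller pattern sends every request that is not an existing file or directory to index.php, keeping the extra path for the router. A hedged sketch, assuming the rules live in a /restAPI/.htaccess file (not a drop-in fix for this exact setup):

Options -MultiViews
RewriteEngine On
RewriteBase /restAPI/

# Anything that is not a real file or directory goes to index.php;
# the remaining path (e.g. /player/read) is left for the router to parse.
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^ index.php [QSA,L]

Note that RewriteCond lines only apply to the RewriteRule immediately after them, so in the original snippet the final catch-all rule is unconditional; and if the rules appear to have no effect at all, AllowOverride for the directory is also worth checking.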
My question is about: https://www.mnot.net/blog/2007/05/15/expires_max-age

They're saying:

The problem with that line of reasoning is that HTTP versions aren’t black and white like this; just because something advertises itself as HTTP/1.0, doesn’t mean it doesn’t understand HTTP/1.1 (see RFC2145 for more).

But here they are saying:

https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.3

If a response includes both an Expires header and a max-age directive, the max-age directive overrides the Expires header, even if the Expires header is more restrictive. This rule allows an origin server to provide, for a given response, a longer expiration time to an HTTP/1.1 (or later) cache than to an HTTP/1.0 cache.

So either the article is incorrect, or W3 is incorrect (or I'm wrong :p). With the last sentence, W3 means you can give a different expiration time to an HTTP/1.1 (or later) cache than to an HTTP/1.0 cache. You can do this by using max-age together with the Expires header.
They can only say something like that by assuming the HTTP/1.0 cache will ignore max-age, because otherwise you would just have the same expiration time for all the caches (HTTP/1.0 and HTTP/1.1 et cetera).

So what is true about HTTP/1.0 caches understanding max-age?
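Concretely, the RFC 2616 wording only pays off if HTTP/1.0 caches ignore max-age. With a hedged example response like the one below, an HTTP/1.1 cache would keep the entry for a day (max-age wins over Expires), while a pure HTTP/1.0 cache, which only understands Expires, would keep it for an hour:

Date: Fri, 01 Dec 2017 10:00:00 GMT
Expires: Fri, 01 Dec 2017 11:00:00 GMT
Cache-Control: max-age=86400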
Hello,

I am new to Hadoop. I have a question regarding YARN memory allocation. If we have 16 GB of memory in the cluster, we can have at least three 4 GB containers and keep 4 GB for other uses. If a job needs 10 GB of RAM, would it use three containers, or would it use one container and start consuming the rest of the RAM?
I have recently moved from using the default Apache2 handler + mpm_prefork to FastCGI + mpm_event. Everything looks fine except for this error message, which keeps appearing in error.log every few minutes:

AH00524: Handler for fastcgi-script returned invalid result code 32

I have not been able to find more details or a solution for this problem.

One more thing: previously, when I ran an API call, I would receive a timeout (500 Internal Server Error) 30 seconds into the request. I updated /etc/apache2/conf-available/php5-fpm.conf with -idle-timeout 300, and the call now continues to run beyond 30 seconds and returns data properly.

However, after the API call completes I also see the same error:

AH00524: Handler for fastcgi-script returned invalid result code 32

What configuration could I be missing?
Thanks in advance!
See: http://www.freesoft.org/CIE/RFC/2068/168.htm

End-to-end reload
The request includes a "no-cache" Cache-Control directive or, for compatibility with HTTP/1.0 clients, "Pragma: no-cache". No field names may be included with the no-cache directive in a request. The server MUST NOT use a cached copy when responding to such a request.

My question is about the last sentence, because I don't understand it 100%. By saying "The server MUST NOT use a cached copy when responding to such a request", they are implicitly saying that there are situations where a server does use a cached copy when responding to a request.

I know and understand that my browser's cache can give me a cached copy, but that's not "the server". So how should I read that? What kind of server are they talking about? Because a normal server (not a proxy server or clustered hosting or anything like that) will just give back the entity body (resource) or "304 Not Modified".

In other words, usually there is a file on a server and the server will either just give the file back OR say it hasn't been changed (304). Usually, by default, there is no caching.

So are they talking here specifically about reverse proxy servers or something? It looks like they are saying it in general, and that's the part I don't understand. What specific situations and servers are they talking about? How should I read those words?
See: http://www.freesoft.org/CIE/RFC/2068/168.htm

The request includes a "no-cache" Cache-Control directive or, for compatibility with HTTP/1.0 clients, "Pragma: no-cache". No field names may be included with the no-cache directive in a request. The server MUST NOT use a cached copy when responding to such a request.

See: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control

Caching directives are unidirectional, meaning that a given directive in a request is not implying that the same directive is to be given in the response.

no-cache
 Forces caches to submit the request to the origin server for validation before releasing a cached copy.

So the Mozilla website is effectively giving a single definition of "no-cache" that covers both requests and responses; otherwise they would have had to give two definitions (one for requests and one for responses), and that's not the case. Under this definition, in some cases you do end up using the cached copy.

But in the first quote from Freesoft.org, they are saying:

The server MUST NOT use a cached copy when responding to such a request.

But with "no-cache" I would expect a check if some content has been changed. But if the content has not changed then I would expect, using a cached copy.

So how I have to see this? Now to me it looks like both definitions contradict each other.

Is there a reliable way to force a PHP script to exit, or the connection to a third-party site to quit, after a certain period of time? I need to display updated stock data on our intranet. Since the Yahoo Finance API recently disappeared, I'm now using an API provided by Alphavantage. I don't want to display third-party data directly on our site, so my system works in two parts: the first script (fired by a cron job every 15 minutes) gets CSV data via their API call and stores it in a temp DB (all tables are truncated at the end of the night); the second script queries the DB and displays the data on our site. Most of the time everything goes smoothly, but occasionally the first script hangs, causing CPU usage to climb and eventually crashing the site. What I don't understand is how the script can run longer than 30 seconds, because max_execution_time in my php.ini is set to 30 seconds. Below is my script. Thank you in advance for any insight you can offer!

<?php

function getData($theTable, $theSymbol) {

    $theKey = 'ourApiKey';
    include('db.php');

    // Get the CSV file from Alphavantage and parse the data, starting at row 2
    // (row one contains the column names).
    $theFile = 'https://www.alphavantage.co/query?function=TIME_SERIES_INTRADAY&symbol='
             . $theSymbol . '&interval=15min&apikey=' . $theKey . '&datatype=csv';

    if ($theFile) {

        $con = mysqli_connect($dbhost, $dbuser, $dbpass, $dbname);

        $start_row = 2;  // define start row
        $end_row = 100;  // define end row
        $i = 1;
We patched SUSE Linux 11 SP4. When we started Apache it didn't work, so we had to roll back apache2-mod_jk.

I would like to upgrade this package because of the security issue. How can I fix this?

apache2-mod_jk-1.2.26-1.30.110
Hi All,

I have a web server that needs to host two SSL certs using one public IP address.

I have added the certs to the server and added a new entry to the ssl.conf file:

<VirtualHost *:443>
 #ServerName www.XXXXXXXX.com
 #DocumentRoot /var/www/site2
 SSLEngine on
 SSLCertificateFile /etc/httpd/conf/ssl.crt/XXXXXXXXX.crt
 SSLCertificateKeyFile /etc/httpd/conf/ssl.key/XXXXXXXXkey
 SSLCACertificateFile /etc/httpd/conf/ssl.crt/XXXXXXXX.crt
</VirtualHost>  

When I restart httpd I get the following message:

Starting httpd: [Wed Nov 15 09:25:05 2017] [warn] _default_ VirtualHost overlap on port 443, the first has precedence

Obviously it is looking at both certs, and as both use port 443 it goes with the first cert it sees and not the second. What am I missing?

CentOS 6.9
Apache with mod_ssl installed
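For what it's worth, two SSL sites on one IP rely on name-based vhosts with SNI (supported in Apache 2.2.12+ when built against an SNI-capable OpenSSL); each <VirtualHost> then needs its own ServerName, and on Apache 2.2 a NameVirtualHost line for port 443. A hedged sketch with placeholder names and paths, not this server's actual config:

NameVirtualHost *:443

<VirtualHost *:443>
    ServerName www.site-one.example.com
    DocumentRoot /var/www/site1
    SSLEngine on
    SSLCertificateFile    /etc/httpd/conf/ssl.crt/site-one.crt
    SSLCertificateKeyFile /etc/httpd/conf/ssl.key/site-one.key
</VirtualHost>

<VirtualHost *:443>
    ServerName www.site-two.example.com
    DocumentRoot /var/www/site2
    SSLEngine on
    SSLCertificateFile    /etc/httpd/conf/ssl.crt/site-two.crt
    SSLCertificateKeyFile /etc/httpd/conf/ssl.key/site-two.key
</VirtualHost>

The "VirtualHost overlap on port 443" warning typically means Apache is not treating the 443 vhosts as name-based, which the NameVirtualHost *:443 line (together with a ServerName in each block) addresses on 2.2.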
We're getting the following error when the HTTP GET request is large:
mod_jk.log:[Wed Oct 18 11:37:20.232263 2017][12082:139812138850048] ajp_marshal_into_msgb::jk_ajp_common.c (517): failed appending the query string of length 7295

I've found several references to this error and I've tried the following, but it did not work:
1. Added worker.template.max_packet_size=65536 to this file: workers.properties
2. Added packetSize to file /usr/apache-tomcat/conf/server.xml:
    <!-- Define an AJP 1.3 Connector on port 8009 -->
    <Connector port="8009" address="127.0.0.1" protocol="org.apache.coyote.ajp.AjpNioProtocol"
                        socket.directBuffer="true"
                        URIEncoding="UTF-8" redirectPort="8443" packetSize="65536" connectionTimeout="120000"  />
3. Added LimitRequestLine 65536 LimitRequestBody 0 LimitRequestFieldSize 65536 LimitRequestFields 10000 (to /nbsnas/http/conf/httpd.conf file)
I restarted Apache after each of the above changes, and also with all of them set to the values above at the same time.

I'm still getting the same error.
Any ideas/recommendations are greatly appreciated.
Thanks!
Hi, just wondering if someone is able to help me rewrite a URL in the following pattern...

https://url.domain.com/url/ --> https://www.domain.com/url/ 

So if you load the first URL it shows the content that lives at the second. It's basically creating a subdomain that points to a folder, but still shows the folder as part of the URL (the /url/ at the end).

Thanking you in advance!!
Cheers
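One way to read this is as a reverse proxy: requests arriving on the subdomain are fetched from the main host and served back, so the folder stays visible in the address bar. A hedged sketch, assuming mod_proxy and mod_ssl are available and that www.domain.com actually serves the content (placeholder names; certificate directives omitted for brevity):

<VirtualHost *:443>
    ServerName url.domain.com
    SSLEngine on
    SSLProxyEngine on
    ProxyPass        /url/ https://www.domain.com/url/
    ProxyPassReverse /url/ https://www.domain.com/url/
</VirtualHost>

If both names point at the same DocumentRoot instead, a plain mod_rewrite internal rewrite would do the same job without proxying.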
The application is eHour time and attendance management software.

After following the instructions from their website I'm getting the error below:
HTTP ERROR
I have attached the logs for more reference.

I would appreciate it if someone with expertise could guide me through troubleshooting the error above.

Thanks
catalina.2017-11-13.log
tomcat8-stderr.2017-11-13.log
commons-daemon.2017-11-13.log
localhost_access_log.2017-11-13.txt
manager.2017-11-13.log
