HTTP Protocol

The Hypertext Transfer Protocol (HTTP) is an application protocol for distributed, collaborative, hypermedia information systems. Hypertext is structured text that uses logical links (hyperlinks) between nodes containing text. HTTP is the protocol to exchange or transfer hypertext. HTTP functions as a request-response protocol in the client-server computing model. HTTP is designed to permit intermediate network elements to improve or enable communications between clients and servers. HTTP is an application layer protocol designed within the framework of the Internet Protocol Suite; it presumes an underlying and reliable transport layer protocol.


I access a website on our internal network at a URL of the form http://192.168.x.y/sitelauncher/sitellauncher.aspx and the page loads fine. But once I take the next step of, say, entering credentials on the page, the web request tries to reach the same server via its external IP address at http://a.b.c.d/sitelauncher/ourservice.svc, which it can't, because that address is unreachable from our internal network and always will be (so I need the request to go out to 192.168.x.y, not a.b.c.d). We're using Silverlight on the client side (so that's on my internal machine, as it would be on any externally connecting client), and I don't know if that poses any additional problems. I've searched every config file and possible source on the web server for the string "a.b.c.d" (the web server's external IP) and can't find it, and if it's hard-coded it can't be changed (legacy code).

So, basically, I wondered if there is a way I could somehow intercept and rewrite requests on the client machine so that they all take the form http://192.168.x.y/sitelauncher/sitellauncher.aspx?

Is there some tool available that would perform this kind of string replacement, a.b.c.d to 192.168.x.y, each time an HTTP request is made?

I've tried a tool called rinetd, which complains that it can't bind to the original target IP:port (no surprise, since that address is invisible from here).

Any ideas folks?


We are working on an e-commerce portal built on .NET.

For faster response and scalability, we have implemented an ARR-based reverse proxy with disk caching. The site is deployed on Windows Server 2012 R2 Standard with IIS 8.5.9600. The origin and the ARR reverse proxy are on the same server as of now.

This works fine most of the time, except there are intermittent issues which we are unable to solve.

Again, on the staging site everything works well, but on the production site with live traffic we intermittently get:

PR_CONNECT_RESET_ERROR when accessing the website.

We're unable to pinpoint the exact steps to reproduce it, but we still get this error occasionally while browsing. Our visitors are facing the same, and we've found that our traffic has been impacted as a result.
We have deployed SQL Always On, which consists of two MS SQL 2017 servers. When we try to connect through the Always On listener IP, the connection takes a long time.

However, it connects fine using the listener DNS name.
Dear EE,

We have WCF client services which are giving an error.

Failed to invoke the service. Possible causes: The service is offline or inaccessible; the client-side configuration does not match the proxy; the existing proxy is invalid. Refer to the stack trace for more detail. You can try to recover by starting a new proxy, restoring to default configuration, or refreshing the service.
An error occurred while receiving the HTTP response to This could be due to the service endpoint binding not using the HTTP protocol. This could also be due to an HTTP request context being aborted by the server (possibly due to the service shutting down). See server logs for more details.

Server stack trace: 
   at System.ServiceModel.Channels.HttpChannelUtilities.ProcessGetResponseWebException(WebException webException, HttpWebRequest request, HttpAbortReason abortReason)
   at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout)
   at System.ServiceModel.Channels.RequestChannel.Request(Message message, TimeSpan timeout)
   at System.ServiceModel.Dispatcher.RequestChannelBinder.Request(Message message, TimeSpan timeout)
   at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)


I'm trying to send the value of an EditText to a PHP file.

I need that value to perform a query, and get data.

I don't understand why the value is not passed.

Can someone help me?




$id = $_POST['id'];

$conn = $dbh->prepare('SELECT * FROM ServiziRistorante WHERE id = :id');
$conn->bindParam(':id', $id, PDO::PARAM_STR);
// The statement was never run: without execute(), rowCount() is always 0
// and the bound value never reaches the query.
$conn->execute();

if ($conn->rowCount() > 0) {
    $rows = array();
    while ($row = $conn->fetch(PDO::FETCH_ASSOC)) {
        $rows[] = $row;
    }
    echo json_encode($rows);
} else {
    echo "No Results Found.";
}

private void showServizio() {

    final String id = idristo.getText().toString();

    class ParseJSonDataClass extends AsyncTask<Void, Void, String> {
        public Context context;
        String FinalJSonResult;

        public ParseJSonDataClass(Context context) {
            this.context = context;
        }

        @Override
        protected void onPreExecute() {
            super.onPreExecute();
        }

        @Override
        protected String doInBackground(Void... arg0) {

            HttpServiceClass httpServiceClass = new HttpServiceClass(HttpURLServizi);

            RequestHandler requestHandler = new RequestHandler();

            // creating request parameters
            HashMap<String, String> params = new HashMap<>();
            params.put("id", id);

When IIS is configured to act as a reverse-proxy (converting an HTTP site to HTTPS), is it possible to have it proxy the MD5 authentication request from the HTTP site through to the user via the HTTPS proxied site.
I have several users who cannot load http: pages just https: pages in Windows 7.  This just started last Friday.
I try an HTTPS POST but the response is empty. I'm on Indy 10.5; it was working with Indy 9, but after upgrading to Indy 10 the response is empty ('').

  TSideStream = class(TIdMultiPartFormDataStream)
    property RequestContentType: string read FRequestContentType write FRequestContentType;

  DataStream := TSideStream.Create;

  IdHTTP.ProxyParams.ProxyServer         := '';
  IdHTTP.ProxyParams.ProxyPort           := 0;
  IdHTTP.ProxyParams.BasicAuthentication := False;
  IdHTTP.ProxyParams.ProxyUsername       := '';
  IdHTTP.ProxyParams.ProxyPassword       := '';

  IdHTTP.Request.ContentLength := -1;
  IdHTTP.Request.ContentRangeEnd := 0;
  IdHTTP.Request.ContentRangeStart := 0;
  IdHTTP.Request.ContentType := 'text/xml';
  IdHTTP.Request.Accept := 'text/xml, */*';
  IdHTTP.Request.BasicAuthentication := False;
  IdHTTP.Request.UserAgent := 'Mozilla/3.0 (compatible; Indy Library)';
  IdHTTP.HTTPOptions := [hoForceEncodeParams];

  DataStream.AddFormField('WUA_VERSION', '1.0');
  DataStream.AddFormField('MODEWUA', 'ACTION');
  Horodatage := FormatDateTime('yyyy-mm-dd hh:nn:ss', Now);
  DataStream.AddFormField('DATE_LOCAL', Horodatage);
  t1 := '01-63-0100-999999-999-54';
  DataStream.AddFormField('AGREMENT', t1);
  DataStream.AddFormField('REMOTE_ID', 'A0-9999-9999-710001-71');
  t2 := 'FR5375009P';
  DataStream.AddFormField('ITR_ID', t2);


I'm trying to connect to this URL with a simple GET request, but I always get the error EIdOSSLConnectError "Error connecting with SSL".
  IdSSLIOHandlerSocket: TIdSSLIOHandlerSocket;

  url := ''
  IdSSLIOHandlerSocket := TIdSSLIOHandlerSocket.Create(nil);
  IdHTTP.IOHandler := IdSSLIOHandlerSocket;
  IdSSLIOHandlerSocket.SSLOptions.Method := sslvTLSv1;
  IdSSLIOHandlerSocket.SSLOptions.Mode := sslmUnassigned;
  IdSSLIOHandlerSocket.SSLOptions.VerifyMode := [];
  IdSSLIOHandlerSocket.SSLOptions.VerifyDepth := 0;
  IdSSLIOHandlerSocket.PassThrough := True;
  IdHTTP.Request.ContentType := 'application/json';
  IdHTTP.Request.Accept := 'text/html, */*';
  IdHTTP.Request.BasicAuthentication := False;
  IdHTTP.Request.UserAgent := 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.140 Safari/537.36 Edge/17.17134';
and I get the "error connecting with SSL". I'm using Indy 9 and the indy_openssl096k OpenSSL libraries that I downloaded. This works with some HTTPS links, but not with my URL.

I am practicing with a Spring Boot project that uses Gradle. I am not very familiar with Gradle. My project has a functional-test source set to test all endpoints, but I am getting one compilation error in my functional-test class. I am attaching a screenshot of the compilation error on MimeType.
Compilation error is: "cannot access org.springframework.util.MimeType"

A few observations:
1) I imported the package "import org.springframework.http.*"
2) When I try to build the project, I get a similar error for MultiValueMap (cannot access org.springframework.util.MultiValueMap)
3) I am using IntelliJ
4) The actual project folder works fine, and I am also able to hit the endpoints with the expected results.
5) Looking at my included libraries, I see that two Spring Boot jars with different versions have somehow been added. Could this be the issue?

Please let me know if any further information is required.


I tried to update my WCF service to use HTTPS but ran into so many errors that I decided to switch back to HTTP before starting the process again (at some point). However, now that I have switched back to HTTP, I am getting the following error when I try to build the service. This was previously working under HTTP, but obviously I broke something when attempting the move to HTTPS.

Severity	Code	Description	Project	File	Line	Suppression State
Error		Reference.svcmap: Failed to generate code for the service reference 'ServiceReference1'.  Cannot import wsdl:portType  Detail: An exception was thrown while running a WSDL import extension: System.ServiceModel.Description.DataContractSerializerMessageContractImporter  Error: Exception has been thrown by the target of an invocation.  XPath to Error Source: //wsdl:definitions[@targetNamespace='']/wsdl:portType[@name='ITransaction']  Cannot import wsdl:binding  Detail: There was an error importing a wsdl:portType that the wsdl:binding is dependent on.  XPath to wsdl:portType: //wsdl:definitions[@targetNamespace='']/wsdl:portType[@name='ITransaction']  XPath to Error Source: //wsdl:definitions[@targetNamespace='']/wsdl:binding[@name='SOAPendpoint']  Cannot import wsdl:port  Detail: There was an error importing a wsdl:binding that the wsdl:port is dependent on.  XPath to wsdl:binding: 


I had a PCI compliance scan done and it came back with a failure for the IIS OPTIONS method, which they said I needed to deny in IIS. Whenever I do that, it causes our phones' email to stop syncing. We are using an Exchange 2010 server at the company and the Apple Mail app to sync (please don't just suggest I use something else). I have Googled until I'm blue in the face and can't find any relationship between the OPTIONS method and Apple Mail. Can anyone please help?
I would like to convert HTTP streaming (HLS) to UDP or RTP. I have tried VLC, but it only works on Windows 10. Does anyone have another software example that is validated and working properly?

I'm trying to configure Pound reverse proxy with an HTTPS connection to a webserver in the backend. Unfortunately it does not work; if I use unencrypted HTTP, it works. Syslog says:
Jun  8 11:11:39 transfer pound: BIO_do_handshake with XXX.XXX.XXX.XXX:443 failed: error:00000000:lib(0):func(0):reason(0)
openssl s_client -connect says "CONNECTION OK".

The used config part of Pound:

        HeadRemove "X-Forwarded-Proto"
        AddHeader "X-Forwarded-Proto: https"
        Address YYY.YYY.YYY.YYY
        Port    443
        Cert    "/etc/ssl/pound/server.pem"

        ## allow PUT and DELETE also (by default only GET, POST and HEAD)?:
        xHTTP           1

                        Address XXX.XXX.XXX.XXX
                        Port    443

I've been surfing the net for several hours with no solution, so I thought "maybe experts exchange can help"?

****** edit #1 a few hours later ******

I sniffed the traffic between the reverse proxy and the HTTPS backend server and added a screen capture. It seems that the web server simply does not answer; Pound then runs into a timeout and closes the connection, but I'm no expert. I've tried putting Pound in front of several web servers, with the same effect. I assume they dislike something in the handshake request packet, but I have no clue what, because I get no …
My question is about this part of the HTTP/1.1 Caching protocol, see: (Handling a Received Validation Request)

A request containing an If-None-Match header field (Section 3.2 of [RFC7232]) indicates that the client wants to validate one or more of its own stored responses in comparison to whichever stored response is selected by the cache.

When a cache decides to revalidate its own stored responses for a request that contains an If-None-Match list of entity-tags, the cache MAY combine the received list with a list of entity-tags from its own stored set of responses (fresh or stale) and send the union of the two lists as a replacement If-None-Match header field value in the forwarded request.

If the response to the forwarded request is 304 (Not Modified) and has an ETag header field value with an entity-tag that is not in the client's list, the cache MUST generate a 200 (OK) response for the client by reusing its corresponding stored response, as updated by the 304 response metadata (Section 4.3.4).

Let's assume we have:

  • A browser cache.
  • A proxy cache.
  • An origin server.

The browser cache contains a stored stale resource with entity-tag "A". The proxy cache contains a stored stale resource with entity-tag "B". The proxy cache can act as a client, and as a server. The entity-tag of the resource on the origin server is also "A". In short:

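To make the quoted rules concrete, here is a toy simulation of the proxy's decision (plain Python, hypothetical function and variable names, no real HTTP): the proxy forwards the union of the client's entity-tags and its own, and then either forwards the 304 or, when the 304 names an entity-tag the client does not hold, generates a 200 from its stored response.

```python
def proxy_revalidate(client_etags, proxy_etag, proxy_stored_body, origin_etag):
    """Simulate the quoted RFC 7234 rules: the proxy cache forwards the
    union of entity-tags and interprets the origin's reply for the client."""
    # The proxy MAY send the union of the client's list and its own tag.
    forwarded = set(client_etags) | {proxy_etag}

    if origin_etag in forwarded:
        # Origin answers 304 (Not Modified), naming origin_etag.
        if origin_etag in client_etags:
            # The client holds that representation itself: forward the 304.
            return ("304 Not Modified", None)
        # Only the proxy holds it; a bare 304 would be useless to the
        # client, so the proxy MUST generate a 200 from its stored copy.
        return ("200 OK", proxy_stored_body)
    # No listed tag matches: the origin sends a full 200 with a new body.
    return ("200 OK", "<new body from origin>")

# Scenario from the question: client stores "A", proxy stores "B",
# and the origin's current entity-tag is "A".
print(proxy_revalidate({"A"}, "B", "<proxy body B>", origin_etag="A"))
# -> ('304 Not Modified', None)

# If instead the origin's current tag were "B", the 304 would name a tag
# that is not in the client's list, so the proxy answers 200 itself.
print(proxy_revalidate({"A"}, "B", "<proxy body B>", origin_etag="B"))
# -> ('200 OK', '<proxy body B>')
```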

See: (304 Not Modified)

If the conditional request originated with an outbound client, such as a user agent with its own cache sending a conditional GET to a shared proxy, then the proxy SHOULD forward the 304 response to that client

I don't understand this part of the protocol. The word "forward" implies that the proxy got the 304 response from somewhere else in the first place. I would have thought that the proxy creates the 304 response rather than forwarding it? How should I interpret this quote? Where does the 304 response come from in the first place?

Imagine you have:

user agent   <->   browser cache   <->   proxy cache   <->   origin server   


In my opinion, this is what will happen:

  1. The user agent initiates the request.
  2. The browser cache adds for example If-None-Match (Etag) to the request.
  3. The proxy cache receives this request.
  4. If the proxy cache contains a valid response, and the entity-tags are the same, then a 304 response will be created by the proxy cache.
  5. The 304 response will be sent to the browser cache and the user agent.

This is not a forward, so where does the word "forward" come from?

If the origin server would have created the 304 response, then a proxy server can receive and forward this response. However, I don't think you have to see it like that. Imagine the entity-tag …
Hi, I am sending an email blast, and in the email I connect each button with a userid, so that when any user who receives the email clicks it, the click can be recorded. E.g. the button link looks like this:

Previously I was using the 3.5 framework and it was working fine, but now, after switching to framework 4.0, instead of going to the page Email.aspx it's trying to go to, FYI, I am also doing URL rewriting.

How can I fix this problem? As a workaround I had to remove the http part, so when I call the page like below, it works.

  Please help
In Android 6.0, using SDK 25, should the system properties http.keepAlive and http.maxConnections be obeyed?

With netstat, I can verify that in the system there are 2 TCP connections continuously open to my server. These connections appear to be neatly reused for HTTP keep alive and they appear when I start the player, disappear when I stop the player. I am using ExoPlayer for live dash streaming (player is downloading approx 3-5 files every 10 seconds from the same server, audio chunk, video chunk and manifest).

But the underlying system seems to ignore the http.maxConnections (and even http.keepAlive) that I wish to control.

My goal is to set http.maxConnections to 1 and ensure there is exactly 1 HTTP keep alive (TCP) connection open to the server. Any way to accomplish this?

Generally, the pre-check directive is very similar to max-age. However, IE's implementation of max-age takes the Age response header into account. In contrast, the implementation of post-check and pre-check do not. Hence, pre-check is equivalent to max-age only when there is no Age header on the response.

This article is from 2009, so it's pretty old. How are max-age and the Age header related nowadays? Or are they no longer related at all?

My question is about one specific reason to use max-age over Expires.

See for example:
Although the Expires header is useful, it has some limitations. First, because there’s a date involved, the clocks on the Web server and the cache must be synchronised; if they have a different idea of the time, the intended results won’t be achieved, and caches might wrongly consider stale content as fresh.

But with max-age you also have the exact same problem, right? In my opinion there are 2 possibilities:

1. A cache receives a response from a server. The cache's clock starts counting from that moment. If there would be a delay between the server sending the response, and the cache receiving the response, then the age would be incorrect. So this is not a good way to calculate the age.

2. A cache receives a response from a server. The age of the response is calculated as a difference between the cache's current date and the Date general header included in the HTTP response.

Case 2 is, in my opinion, the right way to calculate the age of the response. But the response header field "Date" is determined by the server, just like "Expires" is determined by the server. So in both cases the server's clock is compared with the cache's clock, and in this respect (clock synchronization) I see no difference between max-age and Expires?

With case 1 they would be right, because then the cache's clock on moment A …
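For what it's worth, case 2 is roughly what RFC 7234 specifies: the apparent age is computed from the cache's clock and the origin's Date header, so a skewed origin clock distorts max-age freshness in the same way it distorts Expires. A simplified sketch (timestamps are made-up example values; the real algorithm also folds in any upstream Age header and the request/response delay, which this ignores):

```python
from email.utils import parsedate_to_datetime

def apparent_age(date_header, received_at):
    """Simplified RFC 7234 apparent age: the cache's receive time minus
    the origin's Date header, clamped at zero."""
    origin_date = parsedate_to_datetime(date_header).timestamp()
    return max(0.0, received_at - origin_date)

def fresh_by_max_age(date_header, received_at, now, max_age):
    """Is a stored response still fresh under max-age?"""
    resident_time = now - received_at
    return apparent_age(date_header, received_at) + resident_time < max_age

received = parsedate_to_datetime("Mon, 12 Mar 2018 16:05:18 GMT").timestamp()

# Clocks agree: the response is 0 s old on arrival.
print(apparent_age("Mon, 12 Mar 2018 16:05:18 GMT", received))  # 0.0
# Origin clock two minutes behind the cache: the response instantly looks
# 120 s old -- the same cross-clock comparison that corrupts an Expires check.
print(apparent_age("Mon, 12 Mar 2018 16:03:18 GMT", received))  # 120.0
```

So on the clock-synchronization point the question is right: max-age inherits the same sensitivity. The practical advantages of max-age lie elsewhere (relative lifetimes, finer-grained companion directives), not in immunity to skew.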
When a stored response is used to satisfy a request without validation, my browser is not showing me the HTTP response header field "Age"?

Just take a simple test.html file, which is cacheable by default. Now visit the page twice, so the second time the file is served directly from cache without validation.

Firefox shows me response headers like this:

Date: Mon, 12 Mar 2018 16:05:18 GMT
Server: Apache/2.4.17 (Unix) OpenSSL/1.0.1e-fips PHP/5.6.16
Last-Modified: Mon, 12 Mar 2018 12:24:12 GMT
Accept-Ranges: bytes
Content-Length: 143
Content-Type: text/html


But why does Firefox not show me the "Age" header field?


When a stored response is used to satisfy a request without validation, a cache MUST generate an Age header field

And see:

However, lack of an Age header field does not imply the origin was contacted, since the response might have been received from an HTTP/1.0 cache that does not implement Age.

A browser's cache is not HTTP/1.0, so the response headers must contain an Age header field, yet Firefox is not showing me "Age"?

Are browsers only showing the request and response headers that were actually exchanged with the server? But if that were the case, they would show no response headers at all, because there was no response from the server in the case of "200 OK (cached)".

So I don't understand this. What's the logic behind it?

P.S. The example was about Firefox, but for example Chrome is doing the same.

PHP is using:

Expires: Thu, 19 Nov 1981 08:52:00 GMT


I don't understand this 100%. A cache could have a different idea about the time; although it's really rare, a cache could think it's 1980. In a case like that, the cached copy would be seen as fresh.

When using:

Expires: 0


you can avoid problems like that. So in my opinion PHP is choosing the second best solution instead of the best solution.


A cache recipient MUST interpret invalid date formats, especially the value "0", as representing a time in the past (i.e., "already expired").
So when using the value "0", you know for sure it will be seen as a date in the past. But that is the HTTP/1.1 protocol (not HTTP/1.0).

I was also searching for information about HTTP/1.0 and invalid dates, but I could not find an answer. I know HTTP/1.0 implementations CAN adopt things from HTTP/1.1.

How do HTTP/1.0 caches deal with invalid dates? Can I be sure that in all situations "Expires: 0" will be seen as a date in the past? And if not, do you have examples?

I saw Google is using:

Expires: -1


In the past, people set Expires via the HTML meta tag; in cases like that, "-1" could mean something different from "0". But in what kind of situations does "Expires: -1" mean something different from "Expires: 0" in the HTTP headers?

So what should I use: a date in the past, "0", or "-1"?
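The defensive pattern the quoted rule describes can be sketched as: try to parse the Expires value, and treat anything unparseable (including "0" and "-1") as already expired. A small illustration in Python (not any particular cache's real code; `expires_timestamp` is a made-up name):

```python
from email.utils import parsedate_to_datetime

def expires_timestamp(value, the_past=0.0):
    """Treat any invalid Expires date -- including "0" and "-1" --
    as a time in the past, per the RFC 7234 recipient rule."""
    try:
        return parsedate_to_datetime(value).timestamp()
    except (TypeError, ValueError):
        # Unparseable date: already expired.
        return the_past

print(expires_timestamp("0"))    # 0.0 -> already expired
print(expires_timestamp("-1"))   # 0.0 -> same branch as "0"
# A real (past) date parses normally:
print(expires_timestamp("Thu, 19 Nov 1981 08:52:00 GMT") > 0)  # True
```

In the HTTP headers themselves, "0" and "-1" thus land in the same "invalid, treat as past" branch of any compliant recipient; the "-1" vs "0" distinction only ever mattered in historical meta-tag handling. A real past date, "0", and "-1" should all behave the same in practice, with the explicit past date being the most conservative choice for very old or broken caches.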
I'm trying to understand:

Vary: Accept-Encoding


Let's say we have:

- client 1 (only understands gzip-encoding)
- client 2 (only understands deflate-encoding)
- a shared cache
- a server (supports gzip and deflate encoding / compression, so the server can send the response message body encoded / compressed)
- a resource (1 url, cacheable)

If client 1 makes the first request for the resource, the response is stored in the cache, gzip-encoded. If client 2 then makes a request, the cache will serve the gzip-encoded version, which client 2 does not understand.

This is what I understand about it from the internet. But this sounds weird to me.

1. The stored response in the cache must contain "Content-Encoding: gzip", because when a server sends an encoded response it tells you which encoding was used. So if I were a cache and got a request with "Accept-Encoding: deflate" (or with an empty value), I would know that my stored response is gzip-encoded (because of the stored "Content-Encoding: gzip"). Then I wouldn't need "Vary: Accept-Encoding" to know that I have to make a new request to the server??

So why does "Vary: Accept-Encoding" exist at all, and in what kind of situations does it really make a difference?

2. Are there also caches around which can decode / encode (gzip / deflate)? In cases like that there is also no need to add "Vary: Accept-Encoding", because a cache could decode …
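One concrete answer to question 1: Vary matters because a generic shared cache does not know the semantics of Content-Encoding or of any other header; it only knows which request headers the origin declared relevant. A toy cache keyed the way Vary prescribes (hypothetical Python, no real HTTP):

```python
class VaryCache:
    """Store responses keyed by URL plus the values of the request
    headers named in the response's Vary field."""

    def __init__(self):
        self.store = {}

    def _key(self, url, request_headers, vary):
        # The secondary cache key: one (header, value) pair per Vary entry.
        varied = tuple(sorted((h, request_headers.get(h, "")) for h in vary))
        return (url, varied)

    def put(self, url, request_headers, response):
        vary = response.get("Vary", ())
        self.store[self._key(url, request_headers, vary)] = response

    def get(self, url, request_headers):
        # Reuse only if every varied header matches the new request.
        for (u, varied), response in self.store.items():
            if u == url and all(request_headers.get(h, "") == v for h, v in varied):
                return response
        return None

cache = VaryCache()
gzip_resp = {"Vary": ("Accept-Encoding",), "Content-Encoding": "gzip", "body": b"..."}
cache.put("/page", {"Accept-Encoding": "gzip"}, gzip_resp)

print(cache.get("/page", {"Accept-Encoding": "gzip"}) is gzip_resp)  # True: reuse
print(cache.get("/page", {"Accept-Encoding": "deflate"}))            # None: go to origin
```

Without Vary, the key would be the URL alone, and the deflate-only client would be handed the gzip body. The reasoning in point 1 assumes the cache implements content-coding semantics; Vary lets a "dumb" cache stay correct, and it also covers headers (User-Agent, Accept-Language, ...) whose effect leaves no trace in the stored response at all.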
If you have, for example, an image with max-age=31536000, which of these is best to use with HTTPS:

Cache-Control: public, max-age=31536000


Cache-Control: private, max-age=31536000


Cache-Control: max-age=31536000


Which one and why?

I also did some own research, but I'm not sure yet what the answer has to be. I think this is true:

By default web browsers should cache content over HTTPS the same as over HTTP, unless explicitly told otherwise via the HTTP Headers received.

This is about the cache of the browser. For shared caches I think this is true:

If the request is authenticated or secure (i.e., HTTPS), it won’t be cached by shared caches.

Google is saying here, see:

If the response is marked as "public", then it can be cached, even if it has HTTP authentication associated with it, and even when the response status code isn't normally cacheable. Most of the time, "public" isn't necessary, because explicit caching information (like "max-age") indicates that the response is cacheable anyway.

That's what Google is saying, but I also checked what they are doing. See:

cache-control:private, max-age=31536000


cache-control:public, max-age=31536000


Response headers can contain something like:

Cache-Control: must-revalidate


But "must-revalidate" does not exist for the request headers, see:

Why? Is there a reason behind this?

Take, for example, me, my browser, my browser's cache, and the origin server. Say there is a stale cached copy in the browser's cache, and imagine I don't want the cached copy to be served without any request being made to the server, not even when the cache is disconnected from the origin server. I would like to add must-revalidate to the request headers, but in such situations it only exists for the response headers.

Why is that, and what's behind it? Directives like max-age, no-cache, and no-store exist as both response AND request directives, so why is must-revalidate an exception?
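The practical substitute on the request side is "Cache-Control: no-cache", which already forbids every cache on the chain from reusing a stored response without successful validation, so a request-side must-revalidate would add nothing. The asymmetry sketched as toy logic (hypothetical Python, not a real cache):

```python
def may_serve_stale(stored_is_stale, response_directives, request_directives,
                    origin_reachable):
    """Toy decision: may a cache serve its stored copy without revalidating?"""
    if "no-cache" in request_directives:
        # The client demanded end-to-end revalidation: never serve unvalidated.
        return False
    if not stored_is_stale:
        return True
    if "must-revalidate" in response_directives:
        # The origin forbade serving this response stale, ever.
        return False
    # Plain stale copy: a disconnected cache may serve it (with a warning).
    return not origin_reachable

# A stale copy while disconnected is served stale by default...
print(may_serve_stale(True, set(), set(), origin_reachable=False))         # True
# ...but not if the request carried no-cache -- the client-side control asked about.
print(may_serve_stale(True, set(), {"no-cache"}, origin_reachable=False))  # False
```

So the response directive lets the *origin* insist on validation for one specific response, while a client that wants the same guarantee for its own request already has no-cache; a request-side must-revalidate would be redundant with it, which is presumably why it was never defined.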
