HTTP Protocol

The Hypertext Transfer Protocol (HTTP) is an application protocol for distributed, collaborative, hypermedia information systems. Hypertext is structured text that uses logical links (hyperlinks) between nodes containing text. HTTP is the protocol to exchange or transfer hypertext. HTTP functions as a request-response protocol in the client-server computing model. HTTP is designed to permit intermediate network elements to improve or enable communications between clients and servers. HTTP is an application layer protocol designed within the framework of the Internet Protocol Suite; it presumes an underlying and reliable transport layer protocol.
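The request-response model described above can be demonstrated end to end with nothing but the Python standard library. The sketch below (all names illustrative) starts a throwaway server on a loopback port in a background thread and issues a single GET against it:

```python
# Minimal request-response round trip: one server, one client, one GET.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from http.client import HTTPConnection

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello"
        self.send_response(200)                        # status line
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                         # message body
    def log_message(self, *args):                      # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Hello)           # port 0: pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")                               # client sends a request...
resp = conn.getresponse()                              # ...and blocks for the response
body_text = resp.read().decode()
print(resp.status, body_text)                          # → 200 hello
server.shutdown()
```

The same exchange happens over any reliable transport; HTTP itself only defines the request and response messages.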

Hi Experts,

I'm looking to modify the code below to be an update instead of a select; however, I'm not sure where to put the body param, as specified here:
https://howto.caspio.com/web-services-api/rest-api/older-rest-api-versions/table-operations/

Function GetDataFromCASPIO1()
    Dim objHTTP As New WinHttp.WinHttpRequest
    Dim docXML As MSXML2.DOMDocument
    Dim ResponseText As String
    Dim curNode As IXMLDOMNode
    Dim oNodeList As IXMLDOMSelection
    Dim s As String
    Dim URL As String
    Dim access_token As String

    Set docXML = New MSXML2.DOMDocument

    ' Step 1: request an OAuth token.
    Set objHTTP = New WinHttp.WinHttpRequest
    URL = "https://MyAccount.caspio.com/oauth/token"

    objHTTP.Open "POST", URL, False
    objHTTP.SetRequestHeader "Content-Type", "application/x-www-form-urlencoded"

    ' The token request body (grant_type/client_id/client_secret) goes here,
    ' as the argument to Send.
    objHTTP.Send
    ' Crude token parsing: skip the leading {"access_token":" prefix,
    ' then read up to the closing quote.
    ResponseText = Right(objHTTP.ResponseText, Len(objHTTP.ResponseText) - 17)
    access_token = Left(ResponseText, InStr(ResponseText, """") - 1)
    'Debug.Print access_token

    ' Step 2: call the table endpoint.
    Set objHTTP = New WinHttp.WinHttpRequest

    URL = "https://MyAccount.caspio.com/rest/v1/tables/Skilled_Nursing_Visit_Note/rows?q={""limit"":10000,""where"":""visit_date>=GetDate()-7""}"

    '''objHTTP.Open "GET", URL, False
    objHTTP.Open "PUT", URL, False
    objHTTP.SetRequestHeader "Accept", "application/xml"
    objHTTP.SetRequestHeader "Content-Type", "application/json"
    objHTTP.SetRequestHeader "Authorization", "Bearer " & access_token

    ' For an update, the JSON body is the argument to Send, e.g.:
    ' objHTTP.Send "{""FieldName"":""NewValue""}"
    objHTTP.Send
    'Debug.Print objHTTP.Status
    'Debug.Print objHTTP.ResponseText
End Function
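To the question of where the body param goes: the update data travels as the entity body of the PUT request, which with WinHttpRequest is simply the argument to `Send`. The same shape as a hedged Python sketch (the field name and token are placeholders, not taken from Caspio's docs):

```python
# Where the body goes in a PUT: it is the request's data/entity body.
import json
import urllib.request

url = ("https://MyAccount.caspio.com/rest/v1/tables/"
       "Skilled_Nursing_Visit_Note/rows"
       '?q={"where":"visit_date>=GetDate()-7"}')

# Hypothetical field/value; the JSON object becomes the request body.
body = json.dumps({"SomeField": "NewValue"}).encode("utf-8")

req = urllib.request.Request(url, data=body, method="PUT")
req.add_header("Content-Type", "application/json")
req.add_header("Authorization", "Bearer " + "<access_token>")

# urllib.request.urlopen(req) would actually send it; with WinHttpRequest
# the equivalent is objHTTP.Send jsonBody.
print(req.get_method(), len(body))
```

The headers describe the body (`Content-Type: application/json`); the body itself rides after the headers in the request message.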
  


I'm trying to configure Pound reverse proxy with an HTTPS connection to a web server in the backend. Unfortunately it does not work. If I use unencrypted HTTP, it works. Syslog says:
Jun  8 11:11:39 transfer pound: BIO_do_handshake with XXX.XXX.XXX.XXX:443 failed: error:00000000:lib(0):func(0):reason(0)
openssl s_client -connect example.com:443 says "CONNECTION OK".

The used config part of Pound:

 ListenHTTPS
        HeadRemove "X-Forwarded-Proto"
        AddHeader "X-Forwarded-Proto: https"
        Address YYY.YYY.YYY.YYY
        Port    443
        Cert    "/etc/ssl/pound/server.pem"

        ## allow PUT and DELETE also (by default only GET, POST and HEAD)?:
        xHTTP           1

        Service
                BackEnd
                        Address XXX.XXX.XXX.XXX
                        Port    443
                        HTTPS
                End
        End

I've been surfing the net for several hours with no solution, so I thought "maybe Experts Exchange can help?"


****** edit #1 a few hours later ******

I sniffed the traffic between the reverse proxy and the https-backend-server. I added a screen capture. It seems that the web server just does not answer, then pound runs into a timeout and closes the connection, but I'm not an expert. I've tried to put pound in front of several web servers, with the same effect. I assume that they dislike something in the "handshake-request-packet", but I have no clue what, because I get no …
My question is about this part of the HTTP/1.1 Caching protocol, see:
https://tools.ietf.org/html/rfc7234#page-17 (Handling a Received Validation Request)

A request containing an If-None-Match header field (Section 3.2 of [RFC7232]) indicates that the client wants to validate one or more of its own stored responses in comparison to whichever stored response is selected by the cache.

When a cache decides to revalidate its own stored responses for a request that contains an If-None-Match list of entity-tags, the cache MAY combine the received list with a list of entity-tags from its own stored set of responses (fresh or stale) and send the union of the two lists as a replacement If-None-Match header field value in the forwarded request.

If the response to the forwarded request is 304 (Not Modified) and has an ETag header field value with an entity-tag that is not in the client's list, the cache MUST generate a 200 (OK) response for the client by reusing its corresponding stored response, as updated by the 304 response metadata (Section 4.3.4).

Let's assume we have:

  • A browser cache.
  • A proxy cache.
  • An origin server.

The browser cache contains a stored stale resource with entity-tag "A". The proxy cache contains a stored stale resource with entity-tag "B". The proxy cache can act as a client, and as a server. The entity-tag of the resource on the origin server is also "A". In short:
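The scenario above can be sketched mechanically. The snippet below (an illustrative model, not a real cache) implements the two RFC 7234 rules quoted earlier: the proxy forwards the union of entity-tag lists, and on a 304 it only passes the 304 on if the returned ETag is in the client's own list:

```python
def union_if_none_match(client_etags, cache_etags):
    # RFC 7234 4.3.2: the cache MAY send the union of both lists upstream.
    return sorted(set(client_etags) | set(cache_etags))

def status_for_client(etag_from_304, client_etags):
    # If the 304's ETag is not one the client holds, the cache MUST build
    # a full 200 from its own stored response; otherwise the 304 applies.
    return 304 if etag_from_304 in client_etags else 200

# Scenario from the question: browser holds "A", proxy holds "B",
# origin's current entity-tag is "A".
union = union_if_none_match(['"A"'], ['"B"'])        # forwarded If-None-Match
print(union)                                         # → ['"A"', '"B"']
print(status_for_client('"A"', ['"A"']))             # origin answers 304 "A" → 304
print(status_for_client('"B"', ['"A"']))             # had it been "B" → 200
```

So with origin tag "A", the browser receives the 304; had the origin's tag matched only the proxy's stored "B", the proxy would have to synthesize a 200 for the browser.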


See: https://tools.ietf.org/html/rfc7232#page-19 (304 Not Modified)

If the conditional request originated with an outbound client, such as a user agent with its own cache sending a conditional GET to a shared proxy, then the proxy SHOULD forward the 304 response to that client.

I don't understand this part of the protocol. The word "forward" implies that the proxy got the 304 response from somewhere else in the first place. I would think that the proxy creates the 304 response and doesn't forward it? How should I interpret this quote? Where is the 304 response coming from in the first place?


Imagine you have:

user agent   <->   browser cache   <->   proxy cache   <->   origin server   



In my opinion, this is what will happen:

  1. The user agent initiates the request.
  2. The browser cache adds, for example, If-None-Match (ETag) to the request.
  3. The proxy cache receives this request.
  4. If the proxy cache contains a valid response, and the entity-tags are the same, then a 304 response will be created by the proxy cache.
  5. The 304 response will be sent to the browser cache and the user agent.

This is not a forward, so where does the word "forward" come from?

If the origin server had created the 304 response, then a proxy server could receive and forward this response. However, I don't think you have to see it like that. Imagine the entity-tag …
Hi,

Found these meta tags to ensure a page always reloads.

<meta http-equiv="pragma" content="no-cache">
<meta http-equiv="Cache-Control" content="no-cache">
<meta http-equiv="Expires" content="0">



I've seen some pages where, despite having these tags, the cache is still used.

What can you recommend adding to these to be sure that the page arrives fresh each time?

Thanks!

OT
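Worth noting: meta http-equiv tags are honored inconsistently at best, since intermediary caches never parse the HTML. The reliable mechanism is to send the equivalent directives as real HTTP response headers; a commonly recommended set (with no-store added) looks like:

```
Cache-Control: no-store, no-cache, must-revalidate
Pragma: no-cache
Expires: 0
```

How to emit these depends on the server or framework serving the page.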
Hi, I am sending an email blast, and in the email I connect each button to a user ID so that when a user who received the email clicks it, the click can be recorded. For example, a button link looks like this:
http://www.Testweb.com/Pages/EmailBlast/Email.aspx?Id=1&page=www.Testweb.com/Pharmaceutical_Packaging__Details.aspx

Previously I was using the 3.5 framework and it was working fine, but after switching to framework 4.0, instead of going to the page Email.aspx it tries to go to www.Testweb.com/Pharmaceutical_Packaging__Details.aspx. FYI, I am doing URL rewriting as well.

How can I fix this problem? As a workaround I had to remove the http part; when I call the page like below, it works:
http://www.Testweb.com/Pages/EmailBlast/Email.aspx?Id=1&page=Pharmaceutical_Packaging__Details.aspx


  Please help
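A likely explanation (offered as a hedged sketch, not a confirmed diagnosis): the inner URL in the `page=` parameter is not percent-encoded, so the routing/URL-rewriting layer can mistake it for part of the path. Encoding the parameter value sidesteps that:

```python
# Percent-encode the inner URL so it survives as a single query value.
from urllib.parse import urlencode

inner = "www.Testweb.com/Pharmaceutical_Packaging__Details.aspx"
query = urlencode({"Id": 1, "page": inner})   # encodes "/" as %2F
url = "http://www.Testweb.com/Pages/EmailBlast/Email.aspx?" + query
print(url)
```

The receiving page then decodes the value back (most frameworks do this automatically when reading the query string).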
How do I send an HTTP request to a web server using POST, without a web browser?

I.e., this will come from a script; what does the script look like?

Andy
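One way to do this, sketched in Python with only the standard library (the URL and form fields are placeholders):

```python
# Send a form-encoded HTTP POST from a plain script, no browser involved.
import urllib.parse
import urllib.request

data = urllib.parse.urlencode({"name": "Andy", "msg": "hello"}).encode()
req = urllib.request.Request("http://httpbin.org/post", data=data)
# Supplying data makes urllib use POST automatically.
print(req.get_method())   # → POST

# with urllib.request.urlopen(req) as resp:    # uncomment to actually send
#     print(resp.status, resp.read()[:100])
```

Any language with an HTTP client library (curl on the command line, PowerShell's Invoke-WebRequest, etc.) follows the same pattern: build the body, set the method to POST, send.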
I am trying to connect to a secure VAN and push and pull files over a secure connection.  As we are trying to set it up, the people at the VAN are getting this error.

Error Message:

AS2 Outbound has a HTTP proxy server to connect through. Probably error while connecting to proxy : [Error sending document over http to host [http://X.X.213.93], status code [405], reason [Method Not Allowed], response from server [<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1"/><title>405 - HTTP verb used to access this page is not allowed.</title><style type="text/css"><!-- body{margin:0;font-size:.7em;font-family:Verdana, Arial, Helvetica, sans-serif;background:#EEEEEE;} fieldset{padding:0 15px 10px 15px;} h1{font-size:2.4em...]]

Summary:

Sending of document [167499846.edi] to [http://X.X.213.93] FAILED

Any ideas of what I am missing or doing wrong?  The X.X.213.93 is the public IP of my machine receiving the files.

Thanks
Need to write a script which:
1. Download the json file (curl request)
2. Rename the file to  YYYYMMDDHHMM_lng.xml
3. Push it to a shared folder (for example //192.168.1.1/log)
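The three steps above can be sketched like this in Python (URL and share path are the placeholders from the question; on Windows the UNC path would be written `\\192.168.1.1\log`):

```python
# 1. download the file, 2. rename it to YYYYMMDDHHMM_lng.xml, 3. copy to a share.
import shutil
import urllib.request
from datetime import datetime
from pathlib import Path

def build_name(now=None):
    """Timestamped target name, e.g. 201803121605_lng.xml."""
    now = now or datetime.now()
    return now.strftime("%Y%m%d%H%M") + "_lng.xml"

def fetch_and_push(url, share_dir):
    target = Path(share_dir) / build_name()
    local, _ = urllib.request.urlretrieve(url)   # step 1: download to a temp file
    shutil.copy(local, target)                   # steps 2+3: rename while copying
    return target

# fetch_and_push("https://example.com/data.json", r"\\192.168.1.1\log")
print(build_name(datetime(2018, 3, 12, 16, 5)))   # → 201803121605_lng.xml
```

The same flow is a three-line shell script with curl, date, and cp if you prefer.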
Hi Experts

I am working on a Wagtail project (similar to django CMS). I get this error when I run python3 manage.py runserver 0.0.0.0:8000:

 
code 400, message Bad request syntax 
  ('\x16\x03\x01\x00®\x01\x00\x00ª\x03\x03³\x06âP\x97Þ%<Sg\x13Ö×[zE\x96\x15?
  \x96\x00\x1ah')
  You're accessing the development server over HTTPS, but it only supports 
  HTTP.



I changed SECURE_SSL_REDIRECT = False and tested it; I still get the same error. I have disabled the cache in Chrome.

I also deactivated Chrome caching in the registry by following these steps:
Deactivate Chrome Cache in the Registry

Open the Registry (Start -> Run -> regedit)

Search for: HKEY_CLASSES_ROOT\ChromeHTML\shell\open\command

Change the part after ...chrome.exe" to this value: --disable-application-cache --media-cache-size=1 --disk-cache-size=1 -- "%1"

Example: "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --disable-application-cache --media-cache-size=1 --disk-cache-size=1 -- "%1"

I also tried disabling the cache from Chrome developer tools (Network -> Disable cache).

I also tried clearing the HSTS cache in Chrome.

I also tried from an incognito window in Chrome, but I still get the same error.

It is an Ubuntu machine on AWS (accessed via PuTTY from a Windows PC).

I access it from outside (a local Windows PC) through http://54.23x.9x.17:8000 and I am not able to resolve this error.


I also tried on another machine (Linux Ubuntu) and got the following on the console: the request changes to https instead of http, and I see "GET / HTTP/1.1" 301 0 in the console window.

Please help me resolve this error.

With Many Thanks, Bharath AK

CNAME Record caused my emails to fail.

I hosted 2 emails at RackSpace for my domain, and verified they worked fine. This required me to update the MX records where I host my domain.
name1@domain1.com
name2@domain1.com


Fine.

Then, before I had created a website of my own, my partner allowed me to point my domain at his website.

The redirect worked once I created a CNAME record.
domain1.com brings up the site at domain2.com

But the emails stopped working.

The error is "553 Relaying disallowed"

I called RackSpace and they said it's because the MX records at the domain2.com, where domain1.com points at, are pointing to Zoho (a major email provider).

So, my question is...

If I move my two emails from RackSpace to Zoho, can I have a solution where

1) domain forwarding works (domain1.com brings up the site at domain2.com)
2) the emails work as well (both domain1.com addresses work again)

?

Thanks.

Or, is there another way to configure this?

Thanks
In Android 6.0, using SDK 25, should java.net.HttpURLConnection be obeying the system properties http.keepAlive and http.maxConnections?

With netstat, I can verify that in the system there are 2 TCP connections continuously open to my server. These connections appear to be neatly reused for HTTP keep alive and they appear when I start the player, disappear when I stop the player. I am using ExoPlayer for live dash streaming (player is downloading approx 3-5 files every 10 seconds from the same server, audio chunk, video chunk and manifest).

But the underlying system seems to ignore the http.maxConnections (and even http.keepAlive) that I wish to control.

My goal is to set http.maxConnections to 1 and ensure there is exactly 1 HTTP keep alive (TCP) connection open to the server. Any way to accomplish this?
See: https://blogs.msdn.microsoft.com/ieinternals/2009/07/20/internet-explorers-cache-control-extensions/

Generally, the pre-check directive is very similar to max-age. However, IE's implementation of max-age takes the Age response header into account. In contrast, the implementation of post-check and pre-check do not. Hence, pre-check is equivalent to max-age only when there is no Age header on the response.

This article is from 2009, so it's pretty old. How are max-age and the Age header related nowadays? Or are they no longer related at all?
Why can I not update the URL in the browser?

I update QueryString and remove a malicious parameter. But, after executing the following code:

                filterContext.HttpContext.RewritePath(filterContext.HttpContext.Request.Path,
                                                       filterContext.HttpContext.Request.PathInfo,
                                                       filterContext.HttpContext.Request.QueryString.ToString());

I still see that bad domain.

I may be fighting development automation inside my own project.

I paste the following into the browser...

http://SENB-0186.mydomain.org/ContentManagement/?
goto=http%3a%2f%2fsenb-0186.mydomain.org.evil.com%3a80%2fContentManagement%2f

My code captures the goto parameter and removes it from the QueryString. I call the RewritePath() function above, and see the following in the browser


https://dev.nim.mydomain.org/IdentityServices//?return=http://senb-0186.mydomain.org
/ContentManagement/Account/LogOn?ReturnUrl=%2fContentManagement%2f%3fgoto%3dhttp%253a%252f%252f
senb-0186.mydomain.org.evil.com%253a80%252fContentManagement%252f&u
gid=040dec88-8a99-4410-bf72-1d868a207c8d

I have no problem with the introduction of https://dev.nim.mydomain.org/IdentityServices//?
but I do have a problem that the  mydomain.org.evil.com sub-domain re-appears.

Suggestions?

I even created a copy of QueryString, made the deletions on the sanitized version. But that also fails.

                …
My question is about one specific reason to use max-age over Expires.

See for example: https://www.mnot.net/cache_docs/#EXPIRES
Although the Expires header is useful, it has some limitations. First, because there’s a date involved, the clocks on the Web server and the cache must be synchronised; if they have a different idea of the time, the intended results won’t be achieved, and caches might wrongly consider stale content as fresh.

But with max-age you also have the exact same problem, right? In my opinion there are 2 possibilities:

1. A cache receives a response from a server. The cache's clock starts counting from that moment. If there would be a delay between the server sending the response, and the cache receiving the response, then the age would be incorrect. So this is not a good way to calculate the age.

2. A cache receives a response from a server. The age of the response is calculated as a difference between the cache's current date and the Date general header included in the HTTP response.

Case 2 is in my opinion the right way to calculate the age of the response. But the response header field "Date" will be determined by the server, just like "Expires" is determined by the server. So in both cases the server's clock will be compared with the cache's clock. So in this respect (clock synchronization), I see no difference between max-age and Expires?

With case 1 they would be right, because then the cache's clock on moment A …
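For reference, RFC 7234 section 4.2.3 actually combines both of the cases above: the cache uses the server's Date header (so clock skew does matter for max-age too, as argued), but corrects it with its own request/response timestamps. A sketch of that calculation, with times as plain integers for illustration:

```python
# RFC 7234 4.2.3: the age a cache assigns to a response on receipt.
def received_age(date_value, age_value, request_time, response_time):
    # Age implied by comparing the server's Date with our receipt time.
    apparent_age = max(0, response_time - date_value)
    # Upstream Age header plus the time the exchange itself took.
    corrected_age_value = age_value + (response_time - request_time)
    return max(apparent_age, corrected_age_value)

# Server stamped Date=1000; we sent the request at our local time 1005
# and received the response at 1007; no upstream Age header (0).
print(received_age(1000, 0, 1005, 1007))   # → 7
```

So with a skewed server clock (Date in the future), apparent_age clamps to 0 and the transit-corrected Age value takes over; with a slow server clock the skew leaks into the age, which is exactly the max-age/Expires symmetry the question points at.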
Need to Redirect after removing one or more query string params.


I am using a whitelist to remove dangerous query string params, and when done, need to redirect to whatever is left in the  query string.

I understand things may break, but am okay letting our website's existing default behavior handle it.

What is the exact command to redirect?

ActionExecutingContext filterContext is the input param of the ActionFilterAttribute

        public override void OnActionExecuting(ActionExecutingContext filterContext)

and after removing the faulty query string params from:

filterContext.HttpContext.Request.QueryString

I am ready to redirect.

                filterContext.HttpContext.Response.Redirect(filterContext.HttpContext.Request.);

Please complete the above parameter for Redirect().

Thanks
When a stored response is used to satisfy a request without validation, my browser is not showing me the HTTP response header field "Age"?

Just take a simple test.html file, which is cacheable by default. Now visit the page two times, so the second time the file is shown directly from cache without validation.

Firefox shows me response headers like this:

Date: Mon, 12 Mar 2018 16:05:18 GMT
Server: Apache/2.4.17 (Unix) OpenSSL/1.0.1e-fips PHP/5.6.16
Last-Modified: Mon, 12 Mar 2018 12:24:12 GMT
Accept-Ranges: bytes
Content-Length: 143
Content-Type: text/html



But why does Firefox not show me the "Age" header field?

See: https://tools.ietf.org/html/rfc7234

When a stored response is used to satisfy a request without validation, a cache MUST generate an Age header field

And see: https://tools.ietf.org/html/rfc7234#section-5.1

However, lack of an Age header field does not imply the origin was contacted, since the response might have been received from an HTTP/1.0 cache that does not implement Age.

A browser's cache is not HTTP/1.0, so the response headers must contain an Age header field. Yet Firefox is not showing "Age"?

Are browsers only showing the response and request headers from the server? But if that were the case, then they would show no response headers at all, because there was no response from the server in the case of "200 OK (cached)".

So I don't understand this. What's the logic behind this?

P.S. The example was about Firefox, but for example Chrome is doing the same.
See: http://php.net/manual/en/function.session-cache-limiter.php

PHP is using:

Expires: Thu, 19 Nov 1981 08:52:00 GMT



I don't understand this 100%. A cache could have a different idea about time. Although it's really rare, a cache could think it's 1980. In a case like that, the cached copy will be seen as fresh.

When using:

Expires: 0



you can avoid problems like that. So in my opinion PHP is choosing the second best solution instead of the best solution.

See: https://tools.ietf.org/html/rfc7234#section-5.3

A cache recipient MUST interpret invalid date formats, especially the
value "0", as representing a time in the past (i.e., "already
expired").

So when using the value "0", you know for sure it will be seen as a date in the past. But this is the protocol for HTTP/1.1 (not HTTP/1.0).

I was also searching for some information about HTTP/1.0 and invalid dates, but I could not find an answer. I know HTTP/1.0 caches CAN implement things from HTTP/1.1.

How do HTTP/1.0 caches deal with invalid dates? And can I be sure that in all situations "Expires: 0" will be seen as a date in the past? And if not, do you have examples?

I saw Google is using:

Expires: -1



In the past people were setting Expires via html via the meta tag ... in cases like that "-1" could mean something different than "0", but in what kind of situations "Expires: -1" means something different than "Expires: 0" in the http headers?

So what to use? Date in the past, 0 or -1?
I'm trying to understand:

Vary: Accept-Encoding



Let's say we have:

- client 1 (only understands gzip-encoding)
- client 2 (only understands deflate-encoding)
- a shared cache
- a server (supports gzip and deflate encoding / compression, so the server can send the response message body encoded / compressed)
- a resource (1 url, cacheable)

If client 1 makes a request to the resource first, then the response will be stored in cache. The resource is gzip-encoded. If client 2 now makes a request, then the cache will serve the gzip-encoded version, which client 2 does not understand.

This is what I understand about it from the internet. But this sounds weird to me.

1. The stored response in cache must contain "Content-Encoding: gzip", because when a server sends an encoded response, it will let you know which encoding has been used. So suppose I were a cache and I got a request with "Accept-Encoding: deflate" (or with an empty value). As a cache, I know that my stored response is gzip-encoded (because of the stored "Content-Encoding: gzip"). Then I don't need "Vary: Accept-Encoding" to know that I have to make a new request to the server??

So why does "Vary: Accept-Encoding" exist anyway, and in what kind of situations does it really make a difference?

2. Are there also caches around which can decode / encode (gzip / deflate)? In cases like that there is also no need to add "Vary: Accept-Encoding", because a cache could decode …
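One answer, sketched as a toy model (not a real cache implementation): Vary makes the listed request headers part of the cache key, so the cache can hold one stored variant per Accept-Encoding value and answer both clients from cache, rather than either serving the wrong encoding or treating every mismatch as a miss:

```python
# Toy model: Vary turns the named request headers into part of the cache key.
def cache_key(url, vary_headers, request_headers):
    varied = tuple((h, request_headers.get(h, "")) for h in sorted(vary_headers))
    return (url, varied)

gzip_req = {"Accept-Encoding": "gzip"}
deflate_req = {"Accept-Encoding": "deflate"}

k1 = cache_key("/logo.png", ["Accept-Encoding"], gzip_req)
k2 = cache_key("/logo.png", ["Accept-Encoding"], deflate_req)
print(k1 != k2)   # → True: each encoding gets its own cache entry
```

Without Vary, both requests map to the same entry and the cache has no standard signal that the stored bytes only suit some clients; Content-Encoding describes what was stored, but Vary is what tells a generic cache which request headers select among variants.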

If you have, for example, an image with max-age=31536000, when using HTTPS what is the best thing to do:

1:
Cache-Control: public, max-age=31536000



2:
Cache-Control: private, max-age=31536000



3:
Cache-Control: max-age=31536000



Which one and why?


I also did some own research, but I'm not sure yet what the answer has to be. I think this is true:

By default web browsers should cache content over HTTPS the same as over HTTP, unless explicitly told otherwise via the HTTP Headers received.

This is about the cache of the browser. For shared caches I think this is true:

If the request is authenticated or secure (i.e., HTTPS), it won’t be cached by shared caches.

Google is saying here, see: https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/http-caching

If the response is marked as "public", then it can be cached, even if it has HTTP authentication associated with it, and even when the response status code isn't normally cacheable. Most of the time, "public" isn't necessary, because explicit caching information (like "max-age") indicates that the response is cacheable anyway.

That's what Google is saying, but I also checked what they are doing. See:

Example: https://www.google.nl/images/branding/googlelogo/2x/googlelogo_color_120x44dp.png
cache-control:private, max-age=31536000



Example: https://www.google.com/textinputassistant/tia.png
cache-control:public, max-age=31536000

Response headers can contain something like:

Cache-Control: must-revalidate



But "must-revalidate" does not exist for the request headers, see:

https://tools.ietf.org/html/rfc7234.html#section-5.2.1

Why? Is there a reason behind this?

Take for example me, my browser, my browser's cache and the origin server. Let's say there is a stale cached copy in the browser's cache. Imagine I don't want the cached copy to be served without making any request to the server. Also not if the cache is disconnected from the origin server. I could add must-revalidate in the request headers, but this only exists for the response headers in similar situations.

Why is that, and what's behind it? Directives like max-age, no-cache, and no-store exist for both the response AND the request headers, so why is must-revalidate an exception?
Let's first take a look at the definitions.

1. Max-age in request headers:
See: https://tools.ietf.org/html/rfc7234.html#section-5.2.1

The "max-age" request directive indicates that the client is
unwilling to accept a response whose age is greater than the
specified number of seconds.  Unless the max-stale request directive
is also present, the client is not willing to accept a stale
response.

2. Max-age in the response headers:
See: https://tools.ietf.org/html/rfc7234.html#page-26

The "max-age" response directive indicates that the response is to be
considered stale after its age is greater than the specified number
of seconds.

And see: https://tools.ietf.org/html/rfc7234#section-4.2.4

A cache MUST NOT send stale responses unless it is disconnected
(i.e., it cannot contact the origin server or otherwise find a
forward path)

So is it true that "max-age=0" in the response headers is NOT equivalent to "no-cache" in the response headers (because of the disconnected case), BUT "max-age=0" in the request headers IS equivalent to "no-cache" in the response headers?

3. No-cache in the request headers:
See: https://tools.ietf.org/html/rfc7234.html#page-23

The "no-cache" request directive indicates that a cache MUST NOT use
a stored response to satisfy the request without successful
validation on the origin server.

4. No-cache in the response headers:
See: https://tools.ietf.org/html/rfc7234.html#section-5.2.2
I have a web application built in VB.NET. The home page is "main.aspx". I have three other websites that are directing traffic to this website; the redirects point to the main.aspx page. How can I capture which site the traffic is coming from? I'm thinking something with the headers. I would really like to capture the referring URL or something that is unique.
Dear sirs,
I am setting up a global exception handler in Spring. Once an exception is caught, a method in the @ControllerAdvice is called and returns a specific view with exception details. Among the info returned with the view, I have the HttpStatus. I keep getting 200, while the right HttpStatus should be 500, 404, etc.
Here is my code
@ControllerAdvice
@Slf4j
public class AppGlobalExceptionHandler {
      /*
       * Note: You can either point to Exception Types or Http ResponseStatus
       * @ExceptionHandler({MyException.class})
       * public String ...
       *
       * @ExceptionHandler
       * @ResponseStatus(HttpStatus.INTERNAL_SERVER_ERROR)
       * public String
       * */
      
      @ExceptionHandler
      public String handleAnyException(Exception exception, Model model,
                  HttpServletRequest request, HttpServletResponse response) {
            
            log.error("Request raised " + exception.getClass().getSimpleName());
            
            String details = ExceptionUtils.getStackTrace(exception);
            
            AppError error = new AppError();
            error.setStatus(response.getStatus());
            error.setUrl(String.valueOf(request.getRequestURL()));
            error.setMessage(exception.getMessage());
            error.setDetails(details);
            
            model.addAttribute(AppStringHandler.VARIABLE_URL_SUBMIT, AppStringHandler.URL_TASKS_SEND_EMAIL_500);
            model.addAttribute(AppStringHandler.VARIABLE_URL_REDIRECT, AppStringHandler.URL_ADMIN_REDIRECT);
            model.addAttribute(AppStringHandler.VIEW_ERROR_MODEL_ATTRIBUTE, error);
            
            …
I have Tomcat 8.0.35 running, with the web application run from IntelliJ IDEA 2017.3.3 (build successful and WAR deployed), but the browser (I tried Chrome, Firefox, and IE) shows HTTP Status 404 when I try to access http://localhost:8080/ or http://127.0.0.1:8080/.

I built my web application with Maven as a WAR. The WAR file was successfully deployed in Tomcat's "/webapps/ROOT" directory, "index.html" and "index.jsp" are there, and at the end of the deployment the browser page opens.

Tomcat is configured with the default ports: 8080, 8081, 8009.
localhost-8080---WAR-artifact-is-dep.PNG
localhost-8080---http-status-404.PNG