In the previous chapter, we made a simple request to `http://localhost:3000` with cURL and received the following output on the terminal:
```bash
$ curl http://localhost:3000 -v
* Trying 127.0.0.1:3000...
* Connected to localhost (127.0.0.1) port 3000 (#0)
→ GET / HTTP/1.1
→ Host: localhost:3000
→ User-Agent: curl/7.87.0
→ Accept: */*
→
* Mark bundle as not supporting multiuse
← HTTP/1.1 200 OK
← Date: Wed, 23 Aug 2023 13:13:32 GMT
← Connection: keep-alive
← Keep-Alive: timeout=5
← Content-Length: 11
←
* Connection #0 to host localhost left intact
Hello world%
```
Our focus in this chapter will be the first two lines of the request:

```bash
→ GET / HTTP/1.1
→ Host: localhost:3000
```

The request starts with `GET`, which is an HTTP method, commonly called an HTTP verb because it describes an action. HTTP methods are an essential part of the communication between client applications, such as web browsers, and web servers. They provide a structured, standardized way for client applications to interact with web servers, ensuring that communication is clear and concise.
HTTP methods offer a set of instructions for the server to carry out, which can include retrieving a resource, submitting data, or deleting a resource. These methods are essential for ensuring that client applications can perform a variety of tasks in a streamlined and efficient way.
The most commonly used HTTP methods include `GET`, `POST`, `PUT`, `DELETE`, and `PATCH`, each with a specific purpose and set of rules. For example, the `GET` method is used to retrieve data from a server, while the `POST` method is used to submit data to a server for processing. By following these rules, client applications can communicate effectively with web servers, ensuring that data is transmitted correctly and that the server can respond appropriately.
The most basic example of the `GET` method is typing any URL into the browser and pressing Enter: you're looking to GET the contents of a certain website.
Then, the browser sends a request to the server that hosts the website, asking for the content you want to see. The server processes the request and sends a response back to your browser with the content you requested.
The `GET` method is what brings digital content (like text, pictures, HTML pages, stylesheets, and sounds) to your web browser.
When a client sends a GET request to a server, the server should only return data to the client and not modify or change its state. This means that the server should not alter any data on the server-side or perform any actions that could modify the system's state in any way.
This is important to ensure that the data requested by the client is accurate and consistent with the server-side data. By following this protocol, developers can ensure that their applications function smoothly and without any unexpected side effects that could cause problems in the future.
Note: It's technically possible for a server to modify its state while handling a GET request, but it's bad practice and goes against standard HTTP semantics. Stick to the REST API guidelines for good API design.
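To make this concrete, here's a minimal sketch of a GET handler using Node's built-in `http` module. The `/users` route and the in-memory `users` array are made up for illustration; the point is that the handler only reads state and never mutates it:

```js
// Minimal sketch: a GET handler that only reads data (names are illustrative).
const http = require('node:http');

const users = [{ id: 1, name: 'Alice' }]; // hypothetical in-memory "database"

const server = http.createServer((req, res) => {
  if (req.method === 'GET' && req.url === '/users') {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(users)); // read-only: nothing on the server changes
  } else {
    res.writeHead(404);
    res.end();
  }
});

server.listen(3000);
```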
The `POST` method serves a different purpose compared to the `GET` method. While `GET` is used for retrieving data, `POST` is employed when you want to send data to a server, to create a new resource on the server.
Imagine you're filling out a form on a website to create a new account. When you submit the form, the website sends a `POST` HTTP request carrying the information you provided (such as your username, password, and email) to the server. The server processes this data, typically validating it, and if everything checks out, creates a new user account.
Just like with `GET`, it's crucial to follow proper HTTP API guidelines when using the `POST` method. Ensuring that your `POST` requests only create data and don't have unintended side effects helps maintain the integrity of your application and the server's data.
Note: Again, while it is technically possible to use the `POST` method to retrieve data from a server, or to update data, it's considered non-standard and goes against conventional HTTP practices.
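Here's a hedged Node.js sketch of a POST handler that creates a resource; the `/users` route, the in-memory store, and the 201 response are illustrative choices, and real code would validate the input properly:

```js
// Hedged sketch: a POST handler that creates a new resource (names illustrative).
const http = require('node:http');

const users = []; // hypothetical in-memory store

const server = http.createServer((req, res) => {
  if (req.method === 'POST' && req.url === '/users') {
    let body = '';
    req.on('data', (chunk) => (body += chunk)); // collect the request body
    req.on('end', () => {
      let user;
      try {
        user = JSON.parse(body); // real code would validate this thoroughly
      } catch {
        res.writeHead(400);
        return res.end();
      }
      users.push(user); // POST creates: each call appends a new entry
      res.writeHead(201, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify(user));
    });
  } else {
    res.writeHead(404);
    res.end();
  }
});

server.listen(3000);

// Try it: curl -X POST http://localhost:3000/users -d '{"name":"Alice"}'
```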
The PUT method is used to completely replace data that already exists on the server. It's important to remember that, according to HTTP API guidelines, a PUT request should not be used to update only part of the content.
You may ask: aren't `POST` and `PUT` the same, since both can CREATE data if it doesn't exist? There's a difference.
The main difference between the `PUT` and `POST` methods is that `PUT` is considered idempotent. This means that calling it once or multiple times in a row will result in the same outcome, without any additional side effects.
In contrast, successive identical POST requests may have additional effects, such as creating multiple instances of a resource or submitting duplicate orders.
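Here's a tiny hedged sketch of that difference, using plain in-memory stores with made-up names and no server plumbing:

```js
// Hedged sketch of idempotency (hypothetical stores, no HTTP plumbing).
const byId = {}; // store a PUT handler would use
const list = []; // store a POST handler would use

function put(id, payload) {
  byId[id] = payload; // full replacement: repeating it changes nothing further
}

function post(payload) {
  list.push(payload); // each call creates another entry
}

put(1, { name: 'Alice' });
put(1, { name: 'Alice' }); // state is identical to after the first call

post({ name: 'Alice' });
post({ name: 'Alice' }); // now there are two Alices: a duplicate side effect
```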
The `HEAD` method resembles the `GET` method in that it retrieves information from the server, but with one fundamental difference: it only retrieves the metadata associated with a resource, not the resource's actual content.
Metadata can include details like the resource's size, modification timestamp, content-type, and more.
When a `HEAD` request is made to a server, the server processes the request in a manner similar to a `GET` request, but instead of sending back the full content, it responds with just the metadata in the HTTP headers. This can be useful when a client needs information about a resource without transferring the entire content, which saves bandwidth and improves performance.
For example, imagine a web page that lists links to various files for download. When a user clicks on a link, the client can initiate a `HEAD` request to gather information about the file, such as its size, without actually downloading the file itself.
By using the `HEAD` method when needed, we can optimize data retrieval, minimize unnecessary network traffic, and obtain crucial resource information efficiently and quickly.
It's worth noting that servers are not required to support the `HEAD` method, but when they do, they should ensure that the metadata provided in the response headers accurately reflects the current state of the resource.
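Here's a hedged sketch of issuing a HEAD request with Node's built-in `http` client, assuming a server is running on port 3000 as in the previous chapter:

```js
// Hedged sketch: a HEAD request fetches status and headers, never the body.
const http = require('node:http');

const req = http.request(
  { host: 'localhost', port: 3000, path: '/', method: 'HEAD' },
  (res) => {
    console.log(res.statusCode);                // e.g. 200
    console.log(res.headers['content-length']); // the size, without the content
    res.resume(); // a HEAD response carries no body, but drain the stream anyway
  }
);
req.end();
```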
The `DELETE` method is used when you need to remove or delete a specific resource from the server. When using the `DELETE` method, it's important to know that the action is idempotent: no matter how many times you execute the same `DELETE` request, the outcome remains the same, with the targeted resource removed.
We should be careful when using the DELETE method, as its purpose is to permanently remove the resource from the server. To avoid accidental or unauthorized deletions, it's recommended to use proper authentication and authorization mechanisms.
After successfully deleting a resource, the server may respond with a `204 No Content` status code to indicate that the resource has been removed, or a `404 Not Found` status code if the resource was not found on the server.
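A hedged Node.js sketch of such a DELETE handler, with a made-up `/users/:id` route and in-memory store:

```js
// Hedged sketch: a DELETE handler (route and store are illustrative).
const http = require('node:http');

const usersById = { 1: { name: 'Alice' } };

const server = http.createServer((req, res) => {
  const match = req.url.match(/^\/users\/(\d+)$/);
  if (req.method === 'DELETE' && match) {
    const id = match[1];
    if (usersById[id]) {
      delete usersById[id];
      res.writeHead(204); // removed; nothing to send back
    } else {
      res.writeHead(404); // not found (perhaps already deleted)
    }
  } else {
    res.writeHead(404);
  }
  res.end();
});

server.listen(3000);

// Deleting the same id twice leaves the server in the same state:
// the resource stays gone, even though the second call answers 404.
```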
If you only need to remove part of something and not the whole thing, using the `DELETE` method might not be the best idea. To keep things clear and well-organized, it's better to identify each part as a separate resource, or to use the `PATCH` method for partial updates.
The `PATCH` method is used to partially update an existing resource on the server. `PATCH` is different from the `PUT` method, which replaces the entire resource. With `PATCH`, you can modify only the specific parts of the resource's representation that you want to change.
So, if you're looking to partially update a document or a user, use the `PATCH` method.
PATCH requests provide instructions to the server on how to modify a resource. These instructions may include adding, modifying, or removing fields or attributes. The server executes the instructions and makes the changes to the resource accordingly.
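As a hedged sketch, a PATCH-style partial update often boils down to merging only the provided fields, in contrast to PUT's full replacement. The names here are illustrative:

```js
// Hedged sketch: PATCH merges only the changed fields into the resource,
// whereas PUT would replace the whole object.
const user = { id: 1, name: 'Alice', email: 'alice@example.com' };

function applyPatch(resource, changes) {
  return Object.assign(resource, changes); // shallow merge of provided fields
}

applyPatch(user, { email: 'alice@new-domain.com' });
// user.name is untouched; only the email changed.
```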
| Method Name | Description |
| --- | --- |
| GET | Transfers a current representation of the target resource to the client. |
| HEAD | Same as GET, but doesn't transfer the response content, only the metadata. |
| POST | Creates new data. |
| PUT | Replaces the current representation of the target resource with the request content; if not found, creates one. |
| DELETE | Removes all current representations of the target resource. |
| PATCH | Modifies/updates data partially. |
There are other HTTP methods as well, but these are the most commonly used ones and will suffice for most use cases.
In the next chapter, we'll take a look at status codes.
The second component of the first line in the cURL output was `/`:

```bash
GET / HTTP/1.1
```
Let us take a moment to understand this.
When it comes to browsing the internet, URLs like "http://localhost:3000/" or "http://localhost:3000" may appear to be simple at first glance, but these seemingly straightforward addresses actually hold a wealth of information that helps our browser locate specific resources on the web.
When we see "http://localhost:3000", it essentially means "http://localhost:3000/".
The trailing "/" represents the root of a web server. Essentially, it points to the primary entry point or homepage of a website hosted at that specific address.

Think of these slashes as separators: characters that keep multiple different path segments apart.
For example, imagine you're in a library that houses various genres of books. The main entrance, equivalent to the root directory in a URL, guides you to these different sections. In the online world, the trailing slash "/" acts as a digital separator, ensuring that your browser understands the distinct paths to different resources, just like you'd follow separate signs to reach the Fiction, Science, or History sections in a library.
Let's illustrate this concept with an example involving a game website. Consider you're visiting a gaming platform at "http://games.com". Without any additional path, it's like arriving at the game site's main entrance. The trailing slash, though often omitted, signifies the root directory and helps organize the content.
Now, imagine you're interested in exploring racing games. You'd add a path to your URL: "http://games.com/racing". This is like following the sign to the Racing Games section in the library. The trailing slash, in this case, clarifies that you're entering a specific subdirectory for racing games.
Now, let's say you want to access a particular game, "VelocyX." You'd extend the URL to "http://games.com/racing/velocyx". This is similar to walking further into the Racing Games section to find the specific "VelocyX" game. The trailing slash here isn't necessary since you're indicating a specific resource (the game) rather than a directory.
Now, coming back to the main output from cURL: the `/` in `GET / HTTP/1.1` means we're trying to reach the base path. That is what we specified when we made the cURL request:

```bash
curl http://localhost:3000 -v

# is the same as
curl http://localhost:3000/ -v
```
Let's talk about the final component in the first line of the cURL request: `HTTP/1.1`. To understand why we have `HTTP/1.1` in the first place, let's go back a little further than HTTP/1.1.
The initial version of HTTP was very basic and didn't even have a version number. It was later named "HTTP/0.9" to distinguish it from its successors. In its simplicity, an HTTP/0.9 request was composed of only one line and supported only one method: `GET`. That is why it was called a one-line protocol.
The path to the resource followed the method. The full URL wasn't included in the request since, once connected to the server, the protocol, server, and port were not necessary. Despite its simplicity, `HTTP/0.9` was a groundbreaking protocol that paved the way for the development of more advanced versions in the future.
There were no request/response headers, and no status codes either. That doesn't sound like much fun, does it?
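To picture just how bare that was, here's an illustrative HTTP/0.9 exchange, reusing the arrow convention from the cURL output above (→ is what the client sends, ← is what the server sends back). The page name is made up:

```
→ GET /about.html
← <html>A very simple page</html>
```

The request is a single line, the response is nothing but the raw HTML, and the connection then closes.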
HTTP version 1.0 was a big improvement over HTTP 0.9. It added new features like metadata transmission through HTTP headers, explicit versioning, status codes that made it clear what happened with requests, the Content-Type field that allowed for different types of documents to be sent, and new methods that let clients and servers interact in more ways. These changes made HTTP 1.0 stronger and more flexible, which set the stage for even more improvements in later versions.
Let's talk about those in a bit more detail:
The `HTTP/1.0` version brought significant improvements to the protocol, and one of the most pivotal enhancements was the introduction of the HTTP header.

In `HTTP/0.9`, HTTP requests were composed solely of a method (like `GET`) and the resource name (like `/about.html`). However, HTTP/1.0 expanded this approach by allowing additional metadata and information to be included alongside the request through the concept of the HTTP header. The HTTP header brought flexibility and extensibility to the protocol, enabling servers and clients to exchange important details about the request.
We'll talk about headers in an upcoming chapter. For now, think of them as extra metadata.
Versioning became explicit, allowing each HTTP request sent to the server to include version information indicating that the request was being made using `HTTP/1.0`. This enabled more precise communication between clients and servers, ensuring compatibility and accurate interpretation of the request, whereas in `HTTP/0.9` there was no way to explicitly indicate which version of the HTTP protocol was being used.
`HTTP/1.0` implemented status codes in response messages to inform clients about the outcome of their requested operation. These standardized codes indicated whether the request was successful, an error occurred, or further action was required. This enabled clients to better understand the status of their requests without relying solely on the response content.
Since `HTTP/1.1` is a successor to `HTTP/1.0`, it also has this concept of status codes. In our cURL example, you can see the status code on the very first line of the response sent by the server:
```bash
* Mark bundle as not supporting multiuse
← HTTP/1.1 200 OK # 200 is the Status Code, which means OK
```
It also introduced a very important concept in the form of the HTTP header, specifically the `Content-Type` field. This field allowed servers to inform clients about the type of content being sent in the response, which was very helpful for transmitting various document types beyond just plain HTML files. With this addition, it became easier to send images, videos, audio files, and other document types from the server, so the client could accurately interpret the received content and render it appropriately based on its type.
You'll notice that we don't have the `Content-Type` header in our cURL request; we'll get back to that in an upcoming chapter.
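To see why the header matters, here's a hedged Node.js sketch serving the same bytes under two different `Content-Type` values; the routes are made up for illustration:

```js
// Hedged sketch: identical bytes, two different Content-Type headers.
const http = require('node:http');

const server = http.createServer((req, res) => {
  if (req.url === '/as-html') {
    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.end('<h1>Hello</h1>'); // a browser renders this as a big heading
  } else {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('<h1>Hello</h1>'); // a browser shows the tags literally
  }
});

server.listen(3000);
```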
Two new HTTP methods, `POST` and `HEAD`, were introduced as well; we talked about both in the previous section. `HEAD` in particular was useful for checking things like the last modification date or the size of a resource before deciding whether to fully download it.
HTTP/1.1 was the last version in the HTTP/1.x series. After that came newer versions called `HTTP/2` and `HTTP/3`. We'll say very little about `HTTP/3` in this book and will keep our focus on `HTTP/2` and `HTTP/1.1`.
HTTP/1.1 made important improvements to the way clients communicate with servers. It became possible to keep a connection open and send multiple requests over it, and to break large pieces of data into smaller chunks. It also made it easier to cache web page data on your computer, and added more ways for clients to interact with servers. These changes helped make the web faster, more reliable, and more flexible.
Let's talk about the improvements in a bit more detail:
`HTTP/1.1` introduced persistent connections, also called keep-alive connections, which allowed multiple requests and responses to be sent and received over a single connection, unlike HTTP/1.0, which required a new connection for each request-response pair. This reduced the overhead of establishing new connections for each resource, leading to faster and more efficient communication between clients and servers.
You may ask, what overhead did "creating new connections" cause? I'll try my best to explain this.
- Opening a new connection takes time and system resources. With `HTTP/1.0`, a new connection was opened for each resource request. Establishing a new connection requires several steps, including setting up a network connection, negotiating parameters, and implementing security measures if necessary. Once the request is fulfilled, the connection is closed. Frequently opening and closing connections like this is inefficient because of the overhead involved in setting up and tearing down each one.
- Connection limits. To prevent issues with opening too many connections, web browsers and servers limit the number of simultaneous connections that can be open at a time. This helps prevent overloading the server and ensures that the browser can handle the responses efficiently. For example, with `HTTP/1.1`, browsers usually limit the number of simultaneous connections to a single domain to 6 to 8. These limits matter when you want to download multiple CSS files or images at once; opening too many connections can slow down performance and even cause crashes. Limiting the number of connections encourages responsible resource usage and helps maintain a smooth browsing experience for users.
`HTTP/1.1` introduced persistent connections, which send multiple requests and responses over the same connection, reducing the need to constantly open and close connections. This improves the efficiency of web communication and reduces the impact of connection limits on performance. `HTTP/2` and `HTTP/3` tackle this issue even further with multiplexing and other techniques that allow multiple resources to be fetched over a single connection, minimizing the impact of connection limits and reducing latency even more. A small connection-reuse sketch follows.
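Here's a hedged Node.js sketch of connection reuse, assuming a server (like our "Hello world" one) is listening on port 3000. The keep-alive agent keeps a single socket open, so both requests travel over the same connection instead of each opening a fresh one:

```js
// Hedged sketch: reusing one TCP connection for several sequential requests.
const http = require('node:http');

const agent = new http.Agent({ keepAlive: true, maxSockets: 1 });

function get(path) {
  return new Promise((resolve) => {
    http.get({ host: 'localhost', port: 3000, path, agent }, (res) => {
      res.resume(); // discard the body; we only care about the connection
      res.on('end', resolve);
    });
  });
}

get('/')
  .then(() => get('/'))       // reuses the socket kept open by the agent
  .then(() => agent.destroy()); // close the idle socket so the process exits
```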
Pipelining enables clients to send multiple requests to the server without waiting for the responses to previous requests. The server processes the requests in the order they were received and sends back the responses in the same order. Pipelining reduces communication latency by eliminating the need for the client to wait for each response before sending the next request.
We'll understand and implement pipelining during the final chapters of building our framework.
Chunked transfer encoding allowed servers to send large amounts of data in smaller, manageable chunks. This was especially useful for dynamic content or cases where the total content length was unknown at the beginning of the response. It works by sending a series of buffers with specific lengths, which makes it possible for the sender to maintain connection persistence and for the recipient to know when it has received the complete message.
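A hedged Node.js sketch of this in action: Node's http server switches to chunked transfer encoding automatically when you write pieces of the body without declaring a total Content-Length up front:

```js
// Hedged sketch: writing the body in pieces triggers chunked encoding.
const http = require('node:http');

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.write('first chunk\n'); // sent immediately, before the response is done
  setTimeout(() => {
    res.write('second chunk\n'); // sent a second later, on the same response
    res.end('done\n');           // final chunk plus the terminator
  }, 1000);
});

server.listen(3000);

// Try: curl http://localhost:3000 -v
// and look for "Transfer-Encoding: chunked" in the response headers.
```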
This was a game-changing addition. HTTP/1.1 introduced the mandatory `Host` header in the request, which specifies the domain name of the server the client wants to communicate with. This enhancement paved the way for hosting multiple websites on a single IP address (virtual hosting), as servers could now distinguish between different websites based on the Host header.
Prior to `HTTP/1.1`, the original `HTTP/1.0` protocol did not include the "Host" header, which meant that a single web server using a specific IP address and port number could only serve a single website. This limitation posed a significant challenge as the web began to grow rapidly.
The "Host" header made it possible for a web browser to tell a web server which website it wanted to access as part of an HTTP request. This was important because it allowed many websites to be hosted on a single server. Here are some benefits (a small sketch follows the list):
- Virtual Hosting: A single physical server could be used to host multiple websites with different domain names. This was not possible before the "Host" header because the server couldn't determine which site the client wanted to access if multiple domain names were associated with the same IP address. With the "Host" header, the server could inspect the requested domain name and serve the appropriate content.
- Resource Efficiency: Hosting multiple websites on a single server made more efficient use of resources. Without the "Host" header, separate servers or IP addresses would have been required for each hosted website, leading to wastage of resources.
- Cost Savings: Hosting multiple websites on a single server reduced the cost of infrastructure, as fewer servers and IP addresses were needed. This was particularly important for smaller businesses and individuals who couldn't afford dedicated infrastructure for each website.
- Scalability: The "Host" header made it easy to add new websites without needing additional physical hardware, allowing for scalable web hosting solutions.
- Easier Website Management: Website owners and administrators could now manage multiple websites from a single server, simplifying maintenance and updates.
- Cloud Hosting: The "Host" header helped develop cloud hosting platforms where virtualization allowed even more efficient use of resources across multiple clients and websites.
- Domain-based Routing: The "Host" header also allowed for more advanced routing and load balancing strategies, as servers could make routing decisions based on the requested domain.
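Here's a hedged Node.js sketch of virtual hosting in miniature: one server, one port, two "websites", chosen purely by inspecting the Host header. The domain names are made up:

```js
// Hedged sketch: Host-header routing (illustrative domain names).
const http = require('node:http');

const server = http.createServer((req, res) => {
  const host = req.headers.host || ''; // e.g. "blog.example:3000"
  if (host.startsWith('blog.example')) {
    res.end('Welcome to the blog');
  } else if (host.startsWith('shop.example')) {
    res.end('Welcome to the shop');
  } else {
    res.writeHead(404);
    res.end();
  }
});

server.listen(3000);

// Same IP and port, different Host headers, different "websites":
//   curl http://localhost:3000 -H "Host: blog.example"
//   curl http://localhost:3000 -H "Host: shop.example"
```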
`HTTP/1.1` refined the caching mechanisms, allowing more granular control over caching strategies. It introduced headers like `Cache-Control`, `ETag`, and `If-None-Match`, enabling servers and clients to communicate about cache validity and freshness. This led to improved cache management and reduced redundant data transfers.
Here's how:
- `Cache-Control` header: lets servers and clients communicate about the caching behavior of objects like images and HTML files. Its directives include:
  - `public`: says the response can be cached by any intermediary (like a proxy server) and the client.
  - `private`: says the response is meant for a single user and should not be cached by intermediaries.
  - `max-age`: sets the maximum time a resource should be considered fresh in the cache.
  - `no-cache`: says the resource shouldn't be used from the cache without checking with the server.
  - `no-store`: tells caches, including browser caches, not to store a copy of the response under any circumstances.
- `ETag` header: the `ETag` (Entity Tag) header gave servers a way to identify versions of a resource. When a client cached a resource, it stored the `ETag` for that version. When the client wanted to check if its cached version was still valid, it could send the `ETag` in a request's `If-None-Match` header. If the server found that the ETag was still valid, it could respond with a `304 Not Modified` status, showing that the client's cached version is still fresh.
- `If-None-Match` header: like we just discussed, clients could use the `If-None-Match` header to give the server the `ETag` of a cached resource. The server would then compare the provided ETag with the current ETag of the resource. If they matched, the server could respond with a `304 Not Modified` status, saving bandwidth and time by not transferring the full resource.
These caching improvements had several benefits:
- Reduced Data Transfers: the ability to use `If-None-Match` and receive a `304 Not Modified` status code meant that clients didn't have to download resources they already had cached when those resources hadn't changed on the server. This saved both bandwidth and time.
- Faster Page Load Times: caching mechanisms reduced the need to retrieve resources from the server for every request. This led to faster loading times for websites, as clients could use cached resources for subsequent visits.
- Efficient Use of Network Resources: By controlling how resources were cached and for how long, servers could use their network resources more efficiently and reduce the load on both their infrastructure and the clients'.
- Dynamic Content Handling: The ETag mechanism also allowed efficient caching for dynamically generated content. If the content of a page changed, the server could change the ETag, prompting clients to fetch the new content.
HTTP/1.1 introduced a feature called "range requests", which allows clients to request specific parts of a resource instead of the entire thing. This made it possible to resume paused downloads, stream media without interruptions, and save data by downloading only the parts of a file you actually need, greatly enhancing the experience of downloading large files or streaming media. In short, here are the major benefits that range requests provided:
- Resuming Interrupted Downloads: if your download is interrupted, range requests allow you to request only the missing parts, so you can resume the download from where it left off instead of starting from scratch.
- Streaming Media Content: range requests enabled efficient media streaming by allowing clients to request parts of a video as needed. The client can buffer and play parts of the video while simultaneously fetching the next parts, leading to smoother streaming.
- Bandwidth Conservation: by transmitting only the necessary parts of a resource, range requests help conserve bandwidth and reduce data usage for both the client and server. Imagine having to download an entire 4K YouTube video before watching it; how bad would that be? `HTTP/1.0` had no built-in mechanism to efficiently support streaming or buffering of media content. When you requested a file over `HTTP/1.0`, the server would typically send the entire file, and the client (browser or media player) would wait until the whole file was received before it could start processing or rendering it.
- Implementation and Server Response: when a client sends a request with a "Range" header, it specifies the parts it needs from the resource. The server then responds with a `206 Partial Content` status, along with the requested data and additional headers indicating the content's range and length. A small sketch follows.
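Here's a hedged Node.js sketch of a server honoring the Range header. It serves slices of an in-memory buffer; a real server would usually stream the requested range from a file on disk:

```js
// Hedged sketch: answering Range requests with 206 Partial Content.
const http = require('node:http');

const data = Buffer.from('0123456789'.repeat(10)); // 100 bytes of demo data

const server = http.createServer((req, res) => {
  const range = req.headers.range; // e.g. "bytes=0-9"
  const match = range && range.match(/^bytes=(\d+)-(\d*)$/);
  if (match) {
    const start = Number(match[1]);
    const end = match[2] ? Number(match[2]) : data.length - 1;
    res.writeHead(206, {
      'Content-Range': `bytes ${start}-${end}/${data.length}`,
      'Content-Length': end - start + 1,
    });
    res.end(data.subarray(start, end + 1)); // only the requested slice
  } else {
    res.writeHead(200, { 'Content-Length': data.length });
    res.end(data); // no Range header: send everything
  }
});

server.listen(3000);

// Try: curl http://localhost:3000 -v -H "Range: bytes=0-9"
```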
We talked about the `PUT` and `DELETE` methods in the previous section. Here, we'll focus on the `CONNECT`, `OPTIONS`, and `TRACE` HTTP methods.
- `TRACE`: lets a client ask for a diagnostic loop-back (echo) of the request message; that is, the server sends the same request back to the client. `TRACE` isn't often used in regular web applications, but it's useful for debugging and troubleshooting. It helps check how an intermediary, like a proxy server, handles and changes the request; it's a way to look at how the request changes as it goes through different nodes in the network.
- `OPTIONS`: lets a client ask the server what it can do. When an `OPTIONS` request is sent, the server answers with a list of HTTP methods it supports for a specific resource, along with any extra options or features available. This method is helpful for figuring out what actions can be taken on a resource without actually changing the resource.
It helps developers understand what they can do with the server, which makes it easier to create and design web applications. The `OPTIONS` method is very helpful because it allows you to gather vital information about an API's capabilities without triggering any actual actions: no rate limits, no authorization, nothing. It provides insight into how you can interact with the API properly, which methods you can use, and what kind of authentication you need. This is especially helpful for developers aiming to build robust, well-informed applications that interact seamlessly with APIs.
Let's demo this by sending an `OPTIONS` request to Stripe's endpoint and looking at the result. I recommend copy-pasting the cURL command into your terminal:
```bash
# Make a request to `https://api.stripe.com/v1/charges` with the method `OPTIONS`
$ curl -X OPTIONS https://api.stripe.com/v1/charges \
  -H "Origin: https://anywebsite.com" \
  -H "Access-Control-Request-Method: POST" \
  -H "Access-Control-Request-Headers: authorization,content-type" \
  -v

# The backslash (\) is used to break the command into multiple lines, so it's easier to read.
# I trimmed out the request data to make it easier to read; we're only concerned with the response.
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
← HTTP/2 200
← server: nginx
← date: Fri, 25 Aug 2023 16:47:14 GMT
← content-length: 0
← access-control-allow-credentials: true
← access-control-allow-headers: authorization,content-type
← access-control-allow-methods: GET, POST, HEAD, OPTIONS, DELETE
← access-control-allow-origin: https://anywebsite.com
← access-control-expose-headers: Request-Id, Stripe-Manage-Version, X-Stripe-External-Auth-Required, X-Stripe-Privileged-Session-Required
← access-control-max-age: 300
← strict-transport-security: max-age=63072000; includeSubDomains; preload
```
Let's analyze this:
- `HTTP/2 200`: The response was successful, with a status code of 200. Note that Stripe is actually using `HTTP/2` instead of `HTTP/1.1`. Big companies usually upgrade their web servers and APIs to newer protocols like `HTTP/2`; you should do it too! As for `HTTP/3`, I don't think there's an actual requirement yet. Many clients don't support `HTTP/3`, and migrating a code base from `HTTP/2` to `HTTP/3` is complex; it might require a lot of work and modifications to existing server configurations, load balancers, firewalls, etc.
- `server: nginx`: The web server being used is "nginx", probably as a load balancer. We'll discuss load balancers in depth later on in this book.
- `date: Fri, 25 Aug 2023 16:47:14 GMT`: The server generated the response on this date and time.
- `content-length: 0`: The response has no content, with a length of 0 bytes.
- `access-control-allow-credentials: true`: The API supports credentials (such as cookies or HTTP authentication) for cross-origin requests.
- `access-control-allow-headers: authorization,content-type`: Only the specified headers are allowed for cross-origin requests.
- `access-control-allow-methods: GET, POST, HEAD, OPTIONS, DELETE`: The listed HTTP methods are allowed for cross-origin requests.
- `access-control-allow-origin: https://anywebsite.com`: Requests from the specified origin (domain) are allowed to make cross-origin requests. In this case, it's `https://anywebsite.com`.
- `access-control-expose-headers: Request-Id, Stripe-Manage-Version, X-Stripe-External-Auth-Required, X-Stripe-Privileged-Session-Required`: The listed headers can be exposed to the client in the response and might contain additional information for the client.
- `access-control-max-age: 300`: The client can cache the preflight response for the OPTIONS request for up to 300 seconds (5 minutes).
- `strict-transport-security: max-age=63072000; includeSubDomains; preload`: The server enforces strict transport security, with the `max-age` value specifying how long the client should consider the server secure, `includeSubDomains` indicating that subdomains should also use HTTPS, and `preload` indicating that the domain is included in the browser's preload list for HSTS (HTTP Strict Transport Security).
Notice how easy it was to find out which methods, headers, CORS settings, and other features the `/v1/charges` Stripe endpoint supports? We didn't even need to visit the Stripe documentation for this.
- `CONNECT`: this method's primary purpose is to facilitate the creation of a tunnel between the client and the destination server, often involving one or more proxies, with a two-way communication channel. This tunnel can then be secured using TLS (Transport Layer Security), allowing encrypted communication between the client and the destination server. That's a lot of theory; let's look at how it's actually useful. One common use case for the `CONNECT` method is accessing websites that use `HTTPS`. When a client wants to access an HTTPS website through a proxy, it sends a CONNECT request to the proxy server. The proxy then establishes a tunnel to the destination server, allowing the client to communicate directly with the server over a secure connection.

  Note: We'll talk about `HTTPS` in a later chapter. For now, you can think of it as a secure version of the HTTP protocol. A proxy is like a middleman between you and a website or online service: when you want to visit a website, instead of connecting directly, your request goes to the proxy first. The proxy then sends your request to the website and gets the response back, as if the proxy were fetching the web page on your behalf.
We covered the first two lines of the cURL output in this chapter. In the next one, we'll take a look at user agents.