Basic Internet Knowledge – Internet 101

Useful information for beginners about how the internet works, including topics such as IP addresses, webservers and more.

Before delving into the world of web development, I thought it’d make sense to review what makes the "internets" tick and ask the question: what are the necessary foundations that drive this technology?

Many people jump straight into coding and web development by writing code. As a result, they may lack some basic knowledge about what happens beyond their home internet setup and about how the Internet actually works.

Because the Internet is the foundation of web pages, it is necessary (or at least highly recommended) to have some basic knowledge about the Internet itself and web pages in general. Without understanding how the Internet works at its root, it is much harder to create something on the web.

For instance, you can’t be a restaurant manager without being able to cook something useful, or at least judge what a tasty dish tastes like. Technically speaking you could, but it would be a mess: how would you guide the kitchen staff if you didn’t know how the kitchen works?

OK, before any of us gets hungry, let’s drop the kitchen and chef analogy and start talking about tech. This article is not going to be about the history of the Internet, but rather about its functionality.

Webservers

Webservers form a fundamental part of the Internet. Through HTTP and its methods, you can reach the web pages served by these servers. These servers do not need to be tangible (i.e. bare-metal servers); they can also be virtualised. However, virtualisation and a deep dive into webservers are beyond the scope of this article.

A webserver is a piece of software that runs on a server. The basic function of a webserver is to accept HTTP requests from clients and send responses back. (This typically happens when the browser sends an HTTP GET request.)
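
To make this concrete, here’s a minimal sketch of a webserver in TypeScript, assuming Node.js and its built-in http module (the port and the response body are arbitrary choices):

    import { createServer } from "node:http";

    // Accept every incoming HTTP request and send a small HTML response back.
    const server = createServer((req, res) => {
      console.log(`${req.method} ${req.url}`); // e.g. "GET /" when a browser visits
      res.writeHead(200, { "Content-Type": "text/html" });
      res.end("<h1>Hello from a webserver</h1>");
    });

    // Listen on the loopback address, port 8080 (an arbitrary choice).
    server.listen(8080, "127.0.0.1", () => {
      console.log("Server running at http://127.0.0.1:8080/");
    });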

There are different types of web servers available out there – some of the "famous" ones are Apache (used mostly with PHP, as part of the so-called LAMP stack), IIS from Microsoft, and Nginx, which is often used with modern, Node.js-based applications. Speaking of Node.js – and not wanting to get ahead of myself – there are certain web servers, such as Express, that are made purely for a Node.js environment.

The physical, geographical distance between the (web)server and the requesting client increases the time a client needs to wait until a website is shown. This round-trip time can be reduced with a CDN (Content Distribution Network) edge server. Such a server caches the requested data close to the client’s location so that the client can reach the information much faster. Edge servers connect separate networks, making it easier to reduce load times.

A few years ago, a common practice started to evolve: static assets such as CSS and JavaScript files were stored on CDNs for faster access and caching. Many such CDN services exist today, for example cdnjs.com.

To return to our food-related analogies, you can think of an edge server as ordering a pizza from a big pizza chain: your request is delivered first to the main restaurant, which distributes it to a branch near you, which in turn delivers the pizza to you as fast as possible.

Practically speaking, if there’s an Apache server hosting the original content, we can think of it as the "origin server"; if the content is served from a CDN server, we refer to that as an "edge server".

Fun fact: the latest development stack out there, referred to as the "JAMstack", is completely "serverless", because the website content is served up entirely from a CDN. It’s a rather exciting movement that’s sweeping through the developer community like a storm.

An edge server stores the static assets of a webpage, like CSS and JavaScript, and can send them to the client without involving the origin server.

IP addresses

An Internet Protocol (or just IP) address identifies a device that is connected to the Internet. Even at a glance, IP addresses look a bit like phone numbers, for example: "18.130.18.214".

An IPv4 address consists of four groups of numbers separated by dots. The first group is the "highest level", and from left to right we move a level down each time.

The leftmost part typically identifies the network (for example, the one assigned by your service provider), while the rightmost part identifies an individual device on that network. Devices identify each other with these unique IP addresses in order to communicate and send/receive data.

There are two types of IP addresses: static and dynamic. Dynamic IP addresses are temporary; they get assigned when a computer connects to the Internet, and each time this happens, the address may be different.

Static IP addresses are, well, you know, static, which means that once they are set, they don’t change.

The IP addresses we have been talking about so far are all IPv4. However, IPv4 addresses are nearly all allocated, meaning that we are running out of free, available addresses. (This is because in the ’70s people thought that 2^32 addresses were going to be sufficient.)

IPv4 uses 32-bit numbers, and with 32 bits the maximum number of IP addresses is 2^32 – or 4,294,967,296 – which is simply not enough. As a solution to this particular problem, there exists IPv6, which uses 128 bits to represent addresses (e.g. fe80:0000:0000:0000:0202:b3ff:fe1e:8329) instead of 32 bits. With the longer format, there are roughly 340 undecillion (3.4 × 10^38) possible unique IP addresses. Sounds sufficient, right?

Because IPv6 uses the 128-bit format, hexadecimal notation is a much more practical way of displaying an address than plain decimals. Besides the vastly extended address space, there are other features supported by IPv6, like automatic configuration, identification, and encryption of data traffic. All in all, IPv6 brings not only more available IP addresses but some additional refreshing changes to networking.
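
If you’d like to verify these numbers yourself, here’s a tiny sketch using BigInt arithmetic in TypeScript/Node.js:

    // BigInt arithmetic: how many unique addresses each IP version allows.
    const ipv4Addresses = 2n ** 32n;  // 32-bit address space
    const ipv6Addresses = 2n ** 128n; // 128-bit address space

    console.log(ipv4Addresses.toString()); // "4294967296"
    console.log(ipv6Addresses.toString()); // "340282366920938463463374607431768211456"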

localhost, 127.0.0.1, ::1

The IP address 127.0.0.1 (or "::1" for IPv6) is a reference to your own computer. This is independent of the computer’s OS; it works the same way on Linux, Windows and Mac. The first part of this particular IP (127) marks it as a loopback address, which means that you can run a network service on your computer without even having a physical network interface.

If you try to open this address (or the domain "localhost") in your browser, it will not take you to a website out on the Internet; it will take you to your own computer (provided that you are running a webserver).

You are probably wondering: why on earth would I want to run this "loopback thing" on my computer? Well, localhost serves several purposes. The most important one for web developers is testing code: with this loopback address, your computer becomes a webserver where you can run your code and see it in action, effectively serving your website locally.
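
As a quick illustration – assuming Node.js 18+ (for the built-in fetch), an ES module context (for top-level await), and a local server such as the sketch from the Webservers section listening on port 8080 – you can request a page from your own machine:

    // Request a page from your own machine over the loopback interface.
    const response = await fetch("http://127.0.0.1:8080/");

    console.log(response.status);       // 200 if the local server answered
    console.log(await response.text()); // whatever HTML your machine served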

HTTP

If you have visited a website before, it is likely that you have entered "http://". HTTP (Hypertext Transfer Protocol) is the fundamental communication protocol of the World Wide Web. It follows the basic client/server model.

TCP (Transmission Control Protocol) is also important to mention, because it defines how information is packaged up and how it is sent from the server to the client; HTTP controls neither of these. Simply put, HTTP is responsible for what data to get, while TCP is responsible for how to get the data there.

There are some status codes in HTTP. Some of these are familiar to the everyday user, like "404 Page Not Found". The values can be divided into five groups (a small status-checking sketch follows the list):

  • 1xx is for informational messages, such as "101 Switching Protocols".
  • 2xx is for success messages, such as "200 OK".
  • 3xx is for redirections, such as "301 Moved Permanently".
  • 4xx is for client-side errors, such as "403 Forbidden".
  • 5xx is for server-side errors, such as "503 Service Unavailable".
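
As a small sketch of these groups in practice – assuming Node.js 18+ with the built-in fetch, and a purely hypothetical URL – the first digit of the status code tells you which group a response belongs to:

    // The first digit of an HTTP status code tells you its group.
    const res = await fetch("https://example.com/no-such-page"); // hypothetical URL
    const group = Math.floor(res.status / 100);

    if (group === 2) console.log(`Success: ${res.status}`);           // e.g. 200
    else if (group === 3) console.log(`Redirection: ${res.status}`);  // e.g. 301
    else if (group === 4) console.log(`Client error: ${res.status}`); // e.g. 404
    else if (group === 5) console.log(`Server error: ${res.status}`); // e.g. 503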

Headers are another essential feature of HTTP. Using headers, we can send small pieces of metadata as part of requests and responses. By specifying a header, you can choose, for instance, the format in which you’d like to receive the information from the server (e.g. plain text).

Because there are headers involved both in the request and the response, servers can use headers such as "Upgrade" to ask the client to switch to another protocol.
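
Here’s a minimal sketch of working with headers, again assuming Node.js 18+ with the built-in fetch (the URL and its API path are hypothetical):

    // Ask the server for JSON explicitly via the Accept request header.
    const res = await fetch("https://example.com/api/courses", {
      headers: { Accept: "application/json" },
    });

    // Inspect some of the response headers the server sent back.
    console.log(res.status);                      // e.g. 200
    console.log(res.headers.get("content-type")); // e.g. "application/json"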

HTTP versions

As I mentioned at the beginning, I don’t want to write about the history of the Internet, but incidentally, there’s an important thing we need to take a look at real quick: there are four versions of HTTP: 0.9, 1.0, 1.1 and 2.0. Currently, HTTP/1.1 is used predominantly; however, many services out there are already leveraging HTTP/2.0. We will not get into the details of how HTTP/2.0 differs from HTTP/1.1 this time, but suffice it to say that HTTP/2.0 brings many benefits to end-users, including speed and performance improvements.

Anatomy of a URL

From the software side of the Internet, you need a browser to view websites. If you type a homepage address into your browser, you are typing the address of the webpage or, more accurately, its URL (Uniform Resource Locator).

The URL is built up from different pieces, and each of them has a dedicated purpose. Let’s analyse a simple URL: https://courses.fullstacktraining.com/courses/introduction-to-typescript

The first part, https://, is the scheme of the URL. As I wrote before, the HTTP part defines the protocol between the server and the client. The "s" stands for "secure": an added security layer (TLS) encrypts the traffic.

There are other available schemes, like ftp:// for transferring files between the server and the client, or mailto: which can open the user’s default email programme.

The next section is the subdomain (the "courses" part in the above example). It refers to the courses section of this particular website.

Subdomains can divide a website into logical components. They are enabled by DNS entries, which can be set via the domain provider. An example A record could look like this: courses IN A 192.168.2.10.
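
As a sketch of how such a DNS entry is resolved in practice, assuming Node.js and its built-in dns module (run as an ES module for top-level await; the actual addresses returned depend on the real DNS configuration, not on the example record above):

    import { resolve4 } from "node:dns/promises";

    // Look up the A records (IPv4 addresses) behind a subdomain.
    const addresses = await resolve4("courses.fullstacktraining.com");
    console.log(addresses); // e.g. ["192.168.2.10"] if the record above were real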

Also, note that the most famous subdomain is "www". The reason is that back in the day the Internet was used for a variety of things – like telnet and SMTP – each having a dedicated subdomain. In the ’90s, when the Internet became popular and organisations started adding their websites to it, they denoted these sites with the www subdomain.

Related to the subdomain, we also have the second-level domain. Although it is referred to as "second-level", it is the "heart" of the URL: the "name" of the website. Because domain names are unique, the domain name takes you to the exact website you are looking for – you don’t have to sift through a bunch of FullStackTraining websites, since each registered name maps to exactly one domain.

A top-level domain marks the "end" of a canonical URL. It defines the class of the website: .com refers to commercial websites, but there are several others, like .gov for government websites and .edu for colleges and universities. Some countries also have specific domains, such as .es for Spain or .co.uk for the UK, to mention a few.

The next section is the path ("courses/introduction-to-typescript"). This path could be a physical directory or just a virtual mapping. Modern web servers such as Apache or Nginx can translate such paths, via virtual mappings, to actual resources; for example, "/courses/hello" could map to "/courses/hello.php". Clean paths like these also help with Search Engine Optimisation (SEO).

Note that by default a web server looks for a so-called "index" file, which is the file it serves when no path is specified. Effectively, https://fullstacktraining.com opens up https://fullstacktraining.com/index.html, but there’s no need for us to specify this, since web servers do it for us automatically.

There are some other notable sections in a URL, like parameters: key/value pairs that can denote specific actions a website needs to take based on some user action.

These can be found in the URL and are denoted by the "?" and "=" symbols. For example, if we would like to send the user directly to a given course, we could use https://fullstacktraining.com?productid=1234 (assuming the course is marked with the ID 1234). With this feature, the host can track clicks on the site, manage ads (e.g. record which other websites visitors arrived from), or load data from a database based on a query. However, notice that "productid=1234" is not really SEO friendly, which is why something like "/product/name-of-product-1234" is a much better strategy for structuring URLs.

Furthermore, URLs sometimes also contain # symbols. The string after the # refers to an exact part of the site: it is not a mapping on the server, but a marker within a given page. The site will be displayed in your browser, and with the # we can jump to an anchor. Think of it as a "Where’s Waldo?" game: it lets us open the book on the exact page where Waldo hides (provided someone has marked it before).
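
To tie all of these pieces together, here’s a quick sketch using the standard URL class available in Node.js and modern browsers; the query string and fragment have been added to our example URL purely for illustration:

    // Break an example URL into the pieces discussed above.
    const url = new URL(
      "https://courses.fullstacktraining.com/courses/introduction-to-typescript?productid=1234#overview"
    );

    console.log(url.protocol);                      // "https:" – the scheme
    console.log(url.hostname);                      // "courses.fullstacktraining.com"
    console.log(url.pathname);                      // "/courses/introduction-to-typescript"
    console.log(url.searchParams.get("productid")); // "1234" – a query parameter
    console.log(url.hash);                          // "#overview" – the fragment/anchor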

Note that some modern frontend JavaScript frameworks (such as Angular) also leverage the # symbol, as it forms a crucial part of designing SPAs (Single Page Applications).

Cookies

Cookies store information on your computer after you visit a website – your location and language, amongst many other things. This enables the web page to send personalised content to the user.

Cookies can store many pieces of information, like names, email addresses, phone numbers, or other details about someone browsing a given site. As I’m sure you can imagine, this poses a security concern; however, cookies only store information that the user consents to, and they do not have access to other information on your computer.

There are different types of cookies:

  • Session cookies: The purpose of these cookies is to aid with authentication, and they are a must when it comes to applying authentication and authorisation for a site (see the sketch after this list).
  • Persistent cookies: These cookies handle things like pop-up windows where the web page asks whether you want it to remember your password, or whether it should remember your billing address on eBay. Don’t let the name fool you – you can still remove these cookies, but generally speaking their expiry time is much longer than that of session cookies.
  • Third-party cookies: Let me give you a classic example: you search for hotels on the web, and suddenly you start to see adverts displaying various hotels in the region you searched for initially. In practice, there’s a lot more going on behind the scenes, but it all boils down to cookies. Generally speaking, third-party cookies help with advertising and analytics services. With the introduction of GDPR in the European Union, cookie consent now gives end-users much better control and transparency.
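
To make the session-cookie case concrete, here’s a minimal sketch in TypeScript, assuming Node.js and its built-in http module; the cookie name and value are purely hypothetical:

    import { createServer } from "node:http";

    // On the first visit, set a (hypothetical) session cookie; afterwards, read it back.
    const server = createServer((req, res) => {
      const cookies = req.headers.cookie ?? "";

      if (!cookies.includes("sessionId=")) {
        // HttpOnly keeps the cookie away from client-side JavaScript.
        res.setHeader("Set-Cookie", "sessionId=abc123; HttpOnly; Path=/");
        res.end("First visit: session cookie set");
      } else {
        res.end(`Welcome back! Cookies sent by your browser: ${cookies}`);
      }
    });

    server.listen(8080, "127.0.0.1");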

Conclusion

When you enter the https://fullstacktraining.com address in your browser, you send a request to a web server to get the data of the webpage you asked for. The browser turns the URL into an IP address (via DNS) to find the server where the data is stored.

The request travels from your device, with its unique IP, through your router and your ISP (not all computers are connected directly to the Internet), which sends it across the Internet towards the IP address of the server where the web page’s data is stored (or, in some cases, towards an edge server).

The server receives and processes the HTTP GET request. It then breaks the requested information into many small packets, which travel back to your computer – possibly via different routes – where the page is reassembled and displayed on your computer’s screen.

When I want to send an email to someone, the process is a bit longer and more convoluted: my computer connects via an ISP to my email provider’s server (e.g. Gmail), then the Gmail server looks up the recipient’s email provider’s server (e.g. Microsoft), and the recipient connects to their own provider’s server, where they pick up the email.

These "information packages" running through the Internet, with the support of routers. The routers prevent the information to arrive at inadequate computers. Every time a data package is passing a router, it gets a "layer", so the routers could identify where to send the data.

All in all, I hope you enjoyed this post – it took me quite a while to understand all of the above, but now I feel better geared up for what’s coming next!

