What is the Dark Web?

 

The world wide web is estimated to consist of many billions of webpages and is constantly growing.

 

Only a small percentage of these webpages are readily available to the public - social networking sites, online shopping, blogs, news sites - the Internet as we know it. However, there is much more beneath the surface!

 
The Open Web

 

The Open Web is the portion of the world wide web that is indexed by search engines. In other words, it is the content you can find using a standard search engine. The webpages that make up the Open Web are your everyday sites ending in [.com], [.org], [.co.uk] and so on. These webpages are publicly accessible, residing on servers waiting to be retrieved by Google's web crawlers.

 

Finding information by crawling

The web is like an ever-growing library with billions of books and no central filing system. Google uses software known as web crawlers to discover information from webpages and other publicly available content. This information is organised into a massive database, known as the Search index, to be recalled later when needed for a search query. Crawlers look at webpages and follow links on those pages, much like you would if you were browsing content on the web. They go from link to link and bring data about those webpages back to Google’s servers. Links allow crawlers to reach the many billions of interconnected documents on the web.
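The link-following behaviour described above can be sketched as a simple breadth-first crawl. The pages, URLs and `fetch` callback below are hypothetical stand-ins for real HTTP requests, not any search engine's actual crawler:

```python
from collections import deque
from html.parser import HTMLParser

class LinkParser(HTMLParser):
    """Collects href values from anchor tags, as a crawler would."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_page, fetch, limit=100):
    """Breadth-first crawl: visit a page, extract its links, queue the
    unseen ones, and repeat -- going 'from link to link'."""
    seen, queue, order = {start_page}, deque([start_page]), []
    while queue and len(order) < limit:
        page = queue.popleft()
        order.append(page)
        parser = LinkParser()
        parser.feed(fetch(page))          # fetch() returns the page's HTML
        for link in parser.links:
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return order

# A toy three-page 'web' standing in for real fetches over HTTP
WEB = {
    "/home": '<a href="/news">news</a> <a href="/shop">shop</a>',
    "/news": '<a href="/home">home</a>',
    "/shop": '<a href="/news">news</a>',
}
print(crawl("/home", WEB.__getitem__))   # ['/home', '/news', '/shop']
```

Starting from a single seed page, the crawler discovers every page reachable through links, which is why unlinked pages stay invisible to it.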

 

Organising information by indexing

When crawlers find a webpage, Google’s systems render the content of the page, just as a browser does. Google catalogues the webpage by taking note of descriptive texts and metatags hidden within the webpage’s code and keeps track of it all in the Search index. The Google Search index contains hundreds of billions of webpages and is over 100,000,000 gigabytes in size. It’s like the index in the back of a book — with an entry for every word seen on every web page indexed.
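The "entry for every word" idea is essentially an inverted index. A minimal sketch in Python follows; the example pages and the simple tokeniser are illustrative, not Google's actual pipeline:

```python
import re
from collections import defaultdict

def build_index(pages):
    """Inverted index: for every word, the set of pages it appears on --
    'an entry for every word seen on every web page'."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in re.findall(r"[a-z]+", text.lower()):
            index[word].add(url)
    return index

def search(index, query):
    """A query matches the pages containing every query word."""
    results = [index.get(w, set()) for w in query.lower().split()]
    return set.intersection(*results) if results else set()

# Hypothetical pages standing in for crawled content
pages = {
    "news.example/tor":   "Tor routes traffic through volunteer relays",
    "blog.example/web":   "The web is an ever growing library",
    "docs.example/relay": "A relay forwards encrypted traffic",
}
index = build_index(pages)
print(search(index, "traffic relay"))    # {'docs.example/relay'}
```

Looking up a word in the index is instant, which is how a query over hundreds of billions of pages can return results in a fraction of a second.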

 

The life of a crawler

 

 

The Deep Web

 

The Deep Web is made up of webpages that are accessible via a standard web browser but cannot be crawled or indexed by standard search engines. Normally this is information residing in a database, behind a paywall, or behind network authentication (such as online banking). This means there are no signposts to these webpages, but they are waiting to be found if you have the address. As standard search engine crawlers are unable to find non-indexed pages, specialised search engines, such as those for academic databases, are required to access the Deep Web.

The Deep Web is continuously expanding and impossible to measure precisely, because its information is hidden behind firewalls and logins, but it is estimated to be 400 to 550 times larger than the Open Web. Certain content is completely anonymous and only accessible via special software. This is known as the Dark Web.

 

Advanced search engines

 

 

The Dark Web

 

The Dark Web is a portion of the Deep Web, of unknown size, made up of anonymous websites and hidden services that are not accessible via standard web browsers or crawled by any search engine. Many of these websites, commonly known as darknet markets, sell or advertise illegal products and services, such as drugs, weapons and fake documents.

 

The Onion Router (Tor)

Tor is free software that enables anonymous communication and is used to prevent Dark Web users and services from being identified. Tor directs Internet traffic through a free, worldwide, volunteer network consisting of more than seven thousand relays to conceal a user's location and Internet usage from anyone conducting network surveillance or traffic analysis.

All communication is encapsulated in layers of encryption – much like the layers of an onion. The encrypted data is transmitted through a series of network nodes called onion routers. Each onion router peels away a single layer, uncovering the data's next destination. When the final layer is decrypted, the message arrives at its destination. The sender remains anonymous because each intermediary knows only the location of the immediately preceding and following nodes.
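The layer-peeling scheme can be illustrated with a toy simulation. The XOR "cipher" below stands in for real cryptography (Tor negotiates proper symmetric keys with each relay), and the relay names and keys are made up for the example:

```python
def xor(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher (repeating-key XOR). Illustration only --
    not cryptographically secure."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def wrap(message: bytes, route):
    """Sender: add one encryption layer per relay, innermost first,
    so the packet carries one 'onion skin' per hop."""
    packet = message
    for hop, key in reversed(route):      # last hop's layer goes on first
        packet = xor(hop.encode() + b"|" + packet, key)
    return packet

def unwrap(packet: bytes, route):
    """Each relay peels exactly one layer, uncovering only the
    identity of the next hop -- never the full route."""
    for hop, key in route:
        plain = xor(packet, key)
        next_hop, _, packet = plain.partition(b"|")
        assert next_hop == hop.encode()   # this relay sees only its successor
    return packet

# Hypothetical three-relay circuit
route = [("relayA", b"k1"), ("relayB", b"k2"), ("exit", b"k3")]
sealed = wrap(b"hello hidden service", route)
print(unwrap(sealed, route))              # b'hello hidden service'
```

Note that `relayA` can decrypt only its own layer: without `k2` and `k3`, the message and final destination remain opaque to it, which is what makes traffic analysis difficult.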

 

Onion routing

Rather than ending in [.com], hidden websites on the Dark Web end in [.onion]. [.onion] is a special-use top-level domain suffix designating an anonymous hidden service reachable via the Tor network. The [.onion] system makes both the information provider and the person accessing the information more difficult to trace, whether by one another, by an intermediate network host, or by an outsider.

 

.onion site

 
[.onion] web addresses are usually a seemingly random string of letters and digits, so the identity of a site is concealed until you visit it. This also means that if a site is shut down, it can easily reappear under a slightly different address, and rarely stays at one address for long. These hidden websites often become marketplaces where illegal products and services are sold to members of the public via cryptocurrencies - more on that later!
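For current (version 3) hidden services, that random-looking address is not arbitrary: it is derived from the service's public key, as described in Tor's rendezvous specification. A sketch of the derivation, where the random key is a hypothetical stand-in for a real service's identity key:

```python
import base64
import hashlib
import os

def onion_v3_address(pubkey: bytes) -> str:
    """Derive a v3 .onion address from a 32-byte ed25519 public key:
    base32(pubkey | checksum | version), per Tor's rend-spec-v3."""
    assert len(pubkey) == 32
    version = b"\x03"
    checksum = hashlib.sha3_256(b".onion checksum" + pubkey + version).digest()[:2]
    return base64.b32encode(pubkey + checksum + version).decode().lower() + ".onion"

# A random 32-byte key stands in for a real service's identity key
addr = onion_v3_address(os.urandom(32))
print(addr)        # 56 random-looking base32 characters followed by '.onion'
```

Because the address is a fingerprint of the key, a seized site's operator can generate a fresh key and reappear at a brand-new address, which is why darknet markets rarely stay in one place.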

 

Darknet Markets



 

To learn more about the Dark Web, sign up to our OSINT course today at https://www.chenegaeurope.com/osint-training/ or register your interest for our Dark Web Investigations course by emailing jmatchett@chenega.com

 
