Hyperlinks age over time. The longer a hyperlink exists on the internet, the more it deteriorates and eventually becomes inactive and inaccessible. This means that when a user tries to access a certain link, it no longer points to the intended file, webpage or server.
Rotten links can create various challenges for users trying to navigate the vast expanse of the internet in search of valid and valuable information. Moreover, link rot can negatively affect website rankings and search engine optimization (SEO) results.
What causes links to rot?
A research study published in 2016 showed that of 360 individual URLs sampled in 1995, only two remained active 20 years later; the rest had become completely inaccessible.
The phenomenon of link rot usually happens due to the buildup of broken and old links that accumulate when webpages are moved, redirected or reorganized. In most cases, users will encounter a 404 error message when they visit a rotten link on the internet.
Some common reasons behind link rot include the following:
- Website updates and content migration. Websites are occasionally redesigned or updated, which can result in the removal or relocation of pages to new URLs. It's also common for websites and social media content to be migrated to other platforms from time to time. If the original links are not updated accordingly, they will become broken.
- Domain expiration. Once a domain name registration expires, any associated links will become nonfunctional and result in link rot.
- Server issues. Technical issues, website crashes and server failures can also lead to rotten links.
- Content removal. Sometimes content creators or website owners might decide to remove certain pages or resources, rendering the corresponding links nonfunctional.
- Human error. Mistakes are a major cause of link rot. For example, typos in URL input or HTML coding can result in broken links when the link as written doesn't match its intended destination.
Issues posed by link rot
Link rot presents challenges for website owners, publishers, content creators and end users alike. Some common challenges posed by link rot include the following:
- Reduced credibility. A website's credibility can suffer as a result of broken and dead links. Numerous broken links on a website can create the impression that the content is outdated or unreliable or that the site is not being properly updated.
- Poor user experience. Rotten and broken web links can provide a poor user experience. Users trying to access specific information or resources on the internet might not get the desired results and can become frustrated.
- Negative impact on SEO. When ranking websites, search engines such as Google take broken links into account. Therefore, websites with a lot of broken links could notice a drop in organic traffic and search engine rankings.
- Loss of traffic. A website with too many broken links can suffer from traffic loss, as frustrated and discouraged users will probably try to avoid visiting the website in the future.
- Legal implications. In legal and compliance environments, broken links to crucial legal files, laws or terms of service can generate confusion and possibly legal issues. In the medical and healthcare industries, for example, broken links to medical guidelines or research articles can have serious consequences.
- Research and citation problems. For academics and researchers who use online sources in their research, link rot can be an issue. For example, it can become difficult for users to obtain and examine the sources they mention if the links in their citations stop working.
How to fix link rot
While the phenomenon of link rot can't be eradicated completely, there are ways to minimize it. Some common ways to combat link rot include the following:
- Identifying broken links on a website. Website owners should regularly check for broken links on their websites. This can be achieved either manually or through the use of automated tools such as Google Search Console and Google Analytics that enable the inspection of broken links on a website and all the 404 errors they generate. In addition, broken-link checkers such as Xenu's Link Sleuth and plugins for content management systems provide efficient methods for locating and repairing broken links.
- Fixing internal links. Once broken links are identified, they should be fixed. For example, if a broken link is due to a typo, website owners can simply correct the spelling in the URL. However, the majority of broken links are caused by moved or missing content, in which case a 301 redirect -- an HTTP status code that tells browsers and search engines a page has permanently moved to a new URL -- should be used to fix the broken links.
- Using archived links. Website owners should consider getting a backup link from a content archiving service, such as Harvard Law School's Perma.cc or the Internet Archive's Wayback Machine, to ensure the cited source remains accessible through the link in the future.
- Linking to original sources. Content on websites should always link to original sources instead of secondary ones, and precedence should be given to linking to stable websites.
- Using permalinks. Permalinks are a great way to avoid link rot. These are permanent links or URLs that are specifically designed to stay stable for years, making them the least susceptible to link rot. Their stability also makes them easy for search engines to index and for end users to remember.
- Avoiding deep linking. Deep linking is the use of hyperlinks that direct users to a particular section or page of a website. While deep linking is helpful for accessing specific content without navigating a website's hierarchy, it can cause problems if the site's URL structure or content location changes. This can result in unnoticed link rot, leaving the deep links broken and inaccessible.
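The link checks described above can be automated with a short script. Below is a minimal sketch using only Python's standard library; it classifies HTTP status codes the way a broken-link checker would: 2xx responses are healthy, 301/308 signal a permanent move whose stored link should be updated, other 3xx codes are redirects worth reviewing, and 4xx/5xx responses indicate rot. The function names and example behavior are illustrative, not part of any specific tool mentioned in this article.

```python
import urllib.request
import urllib.error

def classify_status(code):
    """Map an HTTP status code to a link-health category."""
    if 200 <= code < 300:
        return "ok"
    if code in (301, 308):
        return "moved permanently"  # update the stored link to the new URL
    if 300 <= code < 400:
        return "redirect"
    return "broken"  # 4xx/5xx: candidate for repair or an archived copy

def check_url(url, timeout=10):
    """Fetch a URL without following redirects and classify the result."""
    # A handler that refuses to follow redirects, so 3xx codes stay visible
    # instead of being silently resolved to their destination.
    class NoRedirect(urllib.request.HTTPRedirectHandler):
        def redirect_request(self, *args, **kwargs):
            return None

    opener = urllib.request.build_opener(NoRedirect)
    try:
        with opener.open(url, timeout=timeout) as resp:
            return classify_status(resp.status)
    except urllib.error.HTTPError as e:
        return classify_status(e.code)
    except urllib.error.URLError:
        return "unreachable"  # DNS failure, refused connection, expired domain
```

Run over a site's list of outbound URLs (from a sitemap or crawl), anything not reported as "ok" is a candidate for correction, a 301 redirect or an archived replacement link.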
Is link rot the same as content drift?
Link rot is often confused with content drift. While both issues are related to the maintenance and longevity of online content, they have some distinct characteristics.
The main difference between the two is that with link rot, the user will recognize an error as they will see a "404 Not Found" message. But with content drift, the reader cannot confirm if the retrieved webpage is the same as the original one referenced by the author.
In contrast, with content drift, the content that the link or backlink points to gets updated or changed, but the link itself stays valid.
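Because the link still resolves, content drift can't be caught by checking status codes. One hypothetical way to detect it is to store a fingerprint of the page content at citation time and compare it on later visits. The sketch below, a minimal illustration of that hash-comparison idea rather than any standard tool, uses a SHA-256 digest of the normalized page text:

```python
import hashlib

def fingerprint(page_text):
    """Return a stable fingerprint of a page's visible content."""
    # Normalize whitespace so trivial reformatting doesn't count as drift.
    normalized = " ".join(page_text.split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def has_drifted(stored_fingerprint, current_page_text):
    """True if the page content no longer matches what was originally cited."""
    return fingerprint(current_page_text) != stored_fingerprint
```

A citation manager could record `fingerprint(page_text)` alongside each saved URL and warn the reader whenever `has_drifted` returns True, flagging exactly the case where the link works but no longer shows what the author referenced.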
Can Web 3.0 solve the crisis of things disappearing from the internet?
In another study, researchers found that after seven years of being on the internet, only 56.61% of links remained functional, leaving a significant 43.39% of the links to rot during that time frame. This raises the question of whether Web 3.0 could offer a remedy for the link rot phenomenon.
Web 3.0, also referred to as the Semantic Web, is an evolving concept that aims to improve the internet by making data more interconnected and easier for machines to interpret. Even though Web 3.0 offers advantages for preserving and accessing data, it might not completely fix the issue of things disappearing from the internet -- and link rot -- all by itself.
However, it can be part of a broader solution alongside other approaches due to its decentralized nature. Since data and content are distributed across multiple nodes instead of being stored on centralized servers, the Web 3.0 approach can potentially reduce the risk of content loss due to single points of failure or website shutdowns.