Just about everyone has used a search engine, and most people have at least a broad sense of what’s going on behind the scenes. Even so, SEO remains a mystery to many. According to recent surveys, as many as three businesses in four don’t bother with any sort of SEO, and a major reason is a failure to really understand its importance. As a digital marketing agency, we want to rectify that.

To explain how SEO works in 2018, we need to start at the beginning. Let’s begin with a crash course — “A Digital Marketer’s History of SEO.”

Going Back to the Beginning

To give any real practical explanation of SEO, we need to go back to the early 90s, at Stanford. (You’ll find that “a group of students at Stanford” is a phrase that comes up a lot in the tech world.) In February of 1993, a group of students at Stanford created the foundation for what would soon evolve into Excite, one of the very earliest search engines, built on a system called Architext. Architext, for its part, was innovative in that it sorted information by keywords extracted straight from the text.

By the end of that year, a number of different platforms (JumpStation, RBSE spider, World Wide Web Worm, and others) had come into use. These “crawled” content with bots (sometimes called spiders or crawlers) that open a page, record and categorize its content, and move on, generating indexes for search engines to pull from when providing results for a given query.
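
Conceptually, the indexes those early crawlers built work a lot like what programmers today call an inverted index: a map from each word to the pages that contain it. Here’s a minimal, purely illustrative sketch in Python (the pages, function names, and matching logic are our own simplification, not the workings of any actual 1993-era engine):

```python
# A toy illustration of a crawler-built inverted index (our own sketch,
# not any real engine's code): extract the words from each page, then map
# each word back to the set of pages that contain it.

pages = {
    "page1.html": "hiking trails and camping gear",
    "page2.html": "camping recipes for the trail",
    "page3.html": "city restaurant reviews",
}

def build_inverted_index(pages):
    index = {}
    for url, text in pages.items():
        for word in set(text.lower().split()):
            index.setdefault(word, set()).add(url)
    return index

def search(index, query):
    # A query is just a lookup: fetch the page set for each query word
    # and keep only the pages that appear in every set.
    results = [index.get(word, set()) for word in query.lower().split()]
    return set.intersection(*results) if results else set()

index = build_inverted_index(pages)
print(search(index, "camping"))       # {'page1.html', 'page2.html'}
print(search(index, "camping gear"))  # {'page1.html'}
```

Everything that follows in this story is essentially about one question: once a lookup like this returns a set of matching pages, in what order should they be shown?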

Six months later, in mid-1994, users could choose from Infoseek, Lycos, Yahoo, and a few other contenders (AltaVista would follow in late 1995), and the idea of a “search engine” had entered relatively common parlance.

Moving Beyond the Keywords

The next major leap forward came in 1997, when Larry Page, also at Stanford, developed what would become PageRank. His revolutionary new algorithm ranked pages in search results in a whole new way. While previous options had relied on keyword frequency, or simply provided something like a library index (an inverted approach, in which you had to know what you were looking for in order to find it), Page’s system ranked sites by quality.

It did this by measuring not just keyword frequency (for topical relevance) but also the size and scale of a site’s link network. Basically, it worked on this assumption: the better a page’s quality and utility, the more often it would be linked to by other sites. Ergo, a site which was linked to by a larger number of sites (sites which, themselves, boasted a robust link network, and so on) would be proportionally more likely to be useful to a potential user.

Each site, in this schema, was assigned a numerical value, which loosely quantified the answer to a question: if I were to start on a random page and follow links at random, how long would it likely take me to find this page? Or, put another way, how likely am I to land on this page, relative to any other, by following purely random links?

Quantified this way, sites with a more robust link network would be more likely to be found, and so would outrank sites with a less robust network. By this system, search results could be ordered in a reliable way which reflected their perceived relative merit. The system worked well for its time, delivering more useful content nearer the top of the SERPs than its competition.
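
To make that “random surfer” idea concrete, here is a heavily simplified sketch in Python (our own illustration, not Google’s actual implementation): each page’s score is the long-run probability of ending up on it by repeatedly following random links, with a small chance of jumping to a random page at each step so the surfer never gets stuck.

```python
# A simplified "random surfer" PageRank sketch (illustrative only, not the
# production algorithm). Each page's score is the probability of landing on
# it by following random links, with a 15% chance of jumping to a random
# page at every step.

links = {
    "A": ["B", "C"],   # page A links out to B and C
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],        # D links to C, but nothing links to D
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}            # start everyone equal
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks)    # split score across links
            for target in outlinks:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

for page, score in sorted(pagerank(links).items(), key=lambda x: -x[1]):
    print(page, round(score, 3))
# C comes out on top: three pages link to it, including well-linked ones,
# while D, which nothing links to, lands at the bottom.
```

On this toy graph the ordering matches the intuition above: the page with the most (and best-connected) inbound links ranks first, and the page nothing links to ranks last.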

Cutting out the Spam

However, it didn’t take long for spammers to find a quick and easy way to game this system. Since rankings depended on links and keywords, it was pretty simple to inflate a page’s rank artificially by linking to it from a network of directories (a practice called link farming) and to stuff keywords to make it seem as though a page was much more useful and relevant than it actually was.

This problem lasted in one form or another for quite a while. Indeed, there was a time when SEO got a bad reputation because of these black-hat tactics.

As early as 2003, Google’s Florida algorithm update tried to crack down on keyword stuffing, but it had a hard time striking a balance. There was no good way to tell the difference between keyword stuffing and especially relevant, natural use of a keyword just by measuring its frequency in a given piece of content. Florida did curb the stuffing, but it overcorrected so severely that the overall utility of the search engine plummeted: it delivered only so-so results, of uneven relevance, because it was playing it too safe.
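
To see why frequency alone is such a blunt instrument, consider a naive keyword-density check like the toy example below (a hypothetical illustration of the general problem, not the actual Florida logic): it flags any page where a keyword makes up more than a fixed share of the words, and it hands down the same verdict for a stuffed page and a genuinely focused one.

```python
# A naive keyword-density filter (hypothetical illustration only). It flags
# pages where a keyword exceeds a fixed share of the words -- and it can't
# tell a stuffed page from a legitimately focused one.

def keyword_density(text, keyword):
    words = text.lower().split()
    return words.count(keyword.lower()) / len(words) if words else 0.0

def looks_stuffed(text, keyword, threshold=0.2):
    return keyword_density(text, keyword) > threshold

stuffed = "cheap shoes cheap shoes buy cheap shoes cheap shoes online"
focused = "our shoes guide compares running shoes trail shoes and dress shoes"

print(looks_stuffed(stuffed, "shoes"))  # True (density 0.4)
print(looks_stuffed(focused, "shoes"))  # True as well (density ~0.36),
                                        # even though this page is useful
```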

As the tech improved to parse more than just frequency, we got a new solution to the spam problem.

Leveraging Machine Learning

Once machine learning matured into a workable tool, it became possible to train algorithms to parse natural language and, for the first time, to weigh keyword frequency in context. This was the development that changed everything.

Now it was possible to cut spam while still allowing strong content through, even when the keyword frequency was identical. Two new algorithm updates, Panda and Penguin, were built on this foundation. Launched in 2011 and 2012, respectively, Panda demoted thin, unhelpful content and gave priority to higher-quality sites, while Penguin targeted and penalized sites with spammy link networks, sometimes de-indexing them altogether.

SEO in 2018

The state of SEO in 2018 is at once the most complicated it has ever been, with so many factors, metrics, and algorithms in play, and also, in some ways, the simplest.

Google’s goal, like that of any search engine, has always been to deliver the best possible content, so it’s in its best interest to develop tools that find that content reliably. Those tools aren’t perfect yet, but they’re getting better by the day.

From a digital marketing perspective, then, and as we’re fond of saying, content is queen. The first step to good SEO is simply to produce lots of awesome content.

From there, it’s more about getting your message out. SEO in 2018 is about making your content clear and easy for crawlers to parse and index. It’s about keeping your information consistent across multiple platforms and building a great link network. That’s right: after more than twenty years, a robust link network is still a major factor in your overall page authority (the metric that more or less determines where your site will rank).

So with that knowledge under your belt, we hope you’ll have the foundation to develop stronger content and to get it in front of the audience it deserves.

Have a great day!

Andrew McLoughlin

Content Marketing Manager at Colibri Digital Marketing
Andrew has been with Colibri Digital Marketing since 2016, serving as Content Marketing Manager. His degrees are from Trent University where he studied ancient cultures from a socio-linguistic perspective. As a writer, his work has been published in a wide variety of digital and print media, and in his spare time, he writes children's stories. He lives in Peterborough, Ontario, with his wife, daughter, and pet rabbits.