A whole industry has built up around the discipline of ‘search engine optimisation’. Some very clever people are spending a lot of time (and quite a lot of their clients’ money) to improve results in web searches.
It may well be worth looking at what they can do for you ... but first, think about what you can do for yourself – with the help of our Ten Top Tips for improving your search engine appeal.
Search engine results aren’t the best way to get visitors to come to your website – telling them the web address is by far the best method, followed closely by providing them with a clickable link in a newsletter or on another site. For most sites, especially those in a highly competitive market like mobile phones, searches on Google and the like will deliver fewer than 10% of useful visits … and it’s likely to be nearer to 2%.
But you still can’t ignore the search engines: even a small number of hits can result in a lot of business. So you’d better try to ensure that you appear when anyone searches for relevant keywords or phrases; and ideally you should appear early in the list of results, because most people won’t bother scrolling down or turning to the next page.
Search engines list the results by what they define as ‘relevance’, and there’s quite a lot that you can do to improve your own relevance – making it more likely that your site gets a good position when the search results appear.
1 Skip the Flash
Be wary of any and all graphics: yes, they might look good – but make sure your site is not designed in a way that the code itself prevents a search engine from discovering all the good stuff that makes you relevant. In general this means you should avoid pages that are entirely graphic, including Flash-only pages – although search engines will check out meta tags like keyword lists, they do like to see at least some text.
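If you do want Flash on a page, one approach is to put some ordinary HTML text inside the object tag as a fallback – browsers without the plug-in (and search engine spiders) will read that instead. A minimal sketch, where intro.swf and the wording are invented for the example:

  <object type="application/x-shockwave-flash" data="intro.swf"
          width="600" height="400">
    <!-- Fallback content: spiders and plug-in-less visitors see this -->
    <p>Discount mobile phones in Suffolk – browse our range of
    handsets and tariffs.</p>
  </object>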
2 Use your title
Your page title – the bit in the <title> tag – should always reflect the main topic of the page, and that probably means the single most important keyword or phrase that you want associated with it. So “Beccles and Bungay Fones Intergalactic Ltd” might not be as useful as “Discount mobile phones in Suffolk”.
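In the HTML, that’s a one-line change in the head of the page – something like this (the wording is just an illustration):

  <head>
    <title>Discount mobile phones in Suffolk</title>
  </head>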
3 Get your keywords in early
Search engines like web pages where keywords appear early or ‘high’ on the page – in the first couple of paragraphs in particular. The first paragraph of text on the page should definitely restate the principal keywords. “Looking for a discount mobile phone in Suffolk? We have more discount mobile phones than any other retailer in Suffolk …”
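In markup terms, that might mean the heading and the opening paragraph both carrying the phrase – a sketch, with invented copy:

  <h1>Discount mobile phones in Suffolk</h1>
  <p>Looking for a discount mobile phone in Suffolk? We have more
  discount mobile phones than any other retailer in Suffolk.</p>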
4 Avoid unnecessary tables
They can ‘push’ your text down the page, as far as the search engine is concerned, and the further from the top your keywords are, the less relevant they will seem. This is because search engines break tables apart when they read them.
Take a simple two-column page, for instance; the human eye sees the content in the two columns as more or less equivalent, but the search engine reads all of column one before moving on to column two. And if column one has something like site navigational links or pictures, the meaty stuff in column two will be regarded as less important.
And don’t nest too many tables within tables – search engines get confused by this. A maximum of three nested tables is a good rule of thumb, but in any case it’s better to use XHTML with DIV tags and CSS to define the position of web page elements.
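As a rough sketch of the CSS approach – the id names here are invented for the example – a two-column page can put the keyword-rich content first in the source and still display the navigation on the left:

  <style type="text/css">
    /* nav is taken out of the normal flow, so it can come last in the source */
    #nav { position: absolute; top: 0; left: 0; width: 160px; }
    /* leave room for the navigation column */
    #content { margin-left: 180px; }
  </style>

  <div id="content">Discount mobile phones in Suffolk ...</div>
  <div id="nav">... site navigation links ...</div>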
5 Take care with scripts
Search engines like to see a high ratio of content to code on pages: it improves the clarity of your message, as far as the search engine is concerned. So put JavaScript in external files and link to them from the HTML.
This is good policy anyhow, since it makes site maintenance a lot easier, but large chunks of script can have the same effect as tables – the search engine reads all the JavaScript first and regards the normal text in HTML as less important because it appears ‘lower’ on the page.
If you can’t link to external scripts, place your scripts as low down the page as possible.
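Moving a script out of the page is straightforward – a minimal sketch, where menu.js is an invented file name. Instead of this:

  <script type="text/javascript">
    // dozens of lines of menu code for the spider to wade through ...
  </script>

the page carries a single line, and the code lives in a separate file:

  <script type="text/javascript" src="menu.js"></script>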
6 Use header tags
For some reason, these days it’s not very fashionable to write web pages that utilise header tags – H1, H2 and the like. Yet search engines just love them; they’ll give added weight to content that appears in header tags.
Google in particular gives a lot of importance to the information within H1 and H2 tags. (But search engines will mark down a page that has more than one H1 tag on it – you can use as many H2s as you like, but more than one H1 is regarded as an attempt to spam the search engine.)
It’s easy enough to redefine tag formats in CSS style sheets if you don’t like the defaults.
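For instance – the values here are just an illustration – a couple of lines of CSS will tame an oversized H1 while keeping its weight with the search engines:

  h1 { font-size: 1.1em; font-weight: bold; margin: 0; }
  h2 { font-size: 1em; font-style: italic; }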
Incidentally, there’s some (largely anecdotal) evidence that many search engines will also give added weight to text marked with the bold or strong tags.
7 Avoid frames
Frames might give some extra oomph to a page, but they can really mess up your chances of a good result on searches. Search engines will read a frameset as a single page with very little content (which is what it is, after all) and so won’t give it the weight it deserves. Your home page in particular shouldn’t use frames.
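If you’re stuck with an existing frameset, one partial remedy is a noframes block, so the frameset page isn’t completely empty as far as the spider is concerned. A sketch, with invented file names:

  <frameset cols="160,*">
    <frame src="nav.html">
    <frame src="main.html">
    <noframes>
      <body>
        <p>Discount mobile phones in Suffolk.
        <a href="main.html">Enter the site here.</a></p>
      </body>
    </noframes>
  </frameset>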
8 Avoid dynamic pages
Most search engines will not include dynamic URLs in their results pages – the addresses full of question marks and parameters that are typically produced on the fly by database-driven or script-based sites.
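One common workaround – assuming your site runs on an Apache server with mod_rewrite enabled, and with file and parameter names invented for the example – is to map static-looking addresses onto the dynamic script behind the scenes, in the site’s .htaccess file:

  RewriteEngine On
  # /phones/123 is actually served by product.php?id=123,
  # but spiders and visitors only ever see the static-looking URL
  RewriteRule ^phones/([0-9]+)$ product.php?id=$1 [L]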
9 Avoid image maps
Search engine crawlers frequently get stuck in image maps and often can’t produce an accurate inventory of your site. Stick with standard HTML navigation schemes if at all possible.
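A plain list of text links gives the spider the same routes a visitor gets from the image map – a minimal sketch, with invented page names:

  <ul>
    <li><a href="phones.html">Discount mobile phones</a></li>
    <li><a href="tariffs.html">Tariffs</a></li>
    <li><a href="contact.html">Contact us</a></li>
  </ul>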
10 Pick the best keywords
Which seems obvious, but as well as requiring some research – what search terms are the punters looking for? – it might also demand a degree of lateral thinking. For instance, your keywords should always be at least two words long; too many sites will be relevant for a single word such as “phones”, and it’s not worth trying to beat the odds.
And Finally...
Get good links. All the major search engines use link analysis as part of the relevance calculation – the more good links your site has, the more likely it is to feature in other relevant search results. But the search engines are clever enough to see that a huge number of links is not good proof of relevance; they will check that your links are indeed related to the topics you want to be found for. More about this next time.
TOP TIPS
Search engines work by using ‘spiders’ or ‘crawlers’ – essentially automated programs or scripts that browse the web systematically, looking for web pages to analyse for their content.
The spider visits a web page, reads it, and then follows links to other pages within the site. It will then revisit the site regularly to look for changes.
Everything the spider finds goes into the search engine’s index or catalogue, a huge database containing a description of every page that the spider finds. If a web page changes, this catalogue is updated with new information.
If there are specific pages that you don’t want included in a search engine analysis, your website directory should include a text file called robots.txt that designates which files and directories are off limits to the search engine.
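A minimal robots.txt looks like this – the directory names are just an illustration:

  User-agent: *
  Disallow: /private/
  Disallow: /cgi-bin/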
Sometimes it can take a while for new pages or changes that the spider finds to be added to the index. Thus, a web page may have been “spidered” but not yet “indexed”. Until it is indexed – added to the catalogue – it is not available to those searching with the search engine.
The search engine itself is the program that sifts through the millions of pages recorded in the catalogue to find matches to a search and rank those results in order of relevance. To determine relevancy, it follows a set of rules known as an algorithm; and exactly how a particular algorithm works is a closely guarded secret. It is possible to figure out the major factors, just not the weight given to them. So issues like keyword location on the page are clearly important, as is the frequency with which the keyword appears … subject to the search engine’s (unpublicised) rules that mark down any attempt at spamming (repeating the keyword too frequently in an attempt to persuade the search engine of the page’s relevance).
To find out what a search engine spider sees on your site, run a simulator like the one at http://www.webconfs.com/search-engine-spider-simulator.php. This will show you what text and what keywords the spider will use in analysing the page.