Keyword analysis is the practice of studying query formats and cross-referencing them against patterns of word relevance within the titles, link anchors, and content of website pages. It is a major tool of search engine optimization. Expertise in utilising and interpreting a very broad range of data types is essential, as are well-founded linguistic skills in the target languages.
Keyword targeting is the application of keyword analysis to specific pages. It helps search engines understand the theme of a page, and which search queries to match it to.
The first step in keyword analysis is the creation of a list of the terms and phrases most likely to be used by searchers potentially interested in the content of the page. Synonyms and strongly associated words that create a context for the keywords then join the list. The more varied the people who can be brought in for this task, the better. People who are not involved in the site's core business, or in SEO, are particularly useful, since SEO professionals tend to become a little biased in their lexical range.
However, the next step, adding these keywords to a page, does not entail simply filling every available title and tag on the page with a juicy keyword or two. If the page does not remain interesting and relevant to a human user, the search engine is unlikely to be impressed either.
The elements of a page do not all have the same value in terms of relevancy ranking; the most important of them are examined below, in descending order of weight.
Simply stuffing these elements with keywords is not enough to fool the search engines. The data from which they build their indexes comes from document analysis, an advanced technique which inspects the relationships between the various parts of the content.
Once a keyword list has been decided, the content and its structure are mapped out to ensure an even and targeted distribution, in which ideally each page has as small a number of target keywords as possible. This avoids cannibalization, a form of self-competition in which the same keywords are targeted by multiple pages of the same domain, diluting the authority of each individual page.
Let us now examine these keyword placement opportunities in more detail:
The most powerful place to put keywords is the page title, located in the head section of an HTML page. The title is the only piece of meta information which influences the page's ranking and relevancy. It appears not only in the browser's navigation tab for that page, but in the SERP listing as well, and is a strong guide to the searcher regarding the core theme of the page.
Only the first 15-20 characters are guaranteed to appear in the tab, so make sure the first words are good, relevant keywords. Do not use the company name or other generic description at the start of the title. It is common practice to use the pipe character (|) to separate specific keywords from generic site names. Google will display only the first 65-70 characters of the title in the SERP.
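A minimal sketch of such a title, borrowing the alpine resort theme used later in this article (the site name 'ExampleTravel' is invented for the example):

```html
<head>
  <!-- Specific keywords first; generic site name after the pipe -->
  <title>Alpine Ski Resorts in Gstaad | ExampleTravel</title>
</head>
```

The keywords sit in the first 15-20 characters, and the whole title stays comfortably within Google's 65-70 character display limit.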
The description tag in the head section has no influence on search engine ranking. However, the text appears in the SERP, and the description makes a very effective advertisement to the searcher.
The meta description should be shorter than 160 characters, to stay within the maximum that Google will display. The text should contain the keywords which best describe the site or page. If a meta description is not provided, the search engine will create one from the text on the page, highlighting the search words used. In some cases this can be preferable to a forced generic description tag, and makes better use of 'long tail' keywords.
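As an illustration (the wording here is invented, and kept well under the 160-character limit), a description tag might look like:

```html
<head>
  <!-- Hypothetical description, roughly 110 characters -->
  <meta name="description"
        content="Compare alpine resorts in Gstaad: chalets, ski-lifts and seasonal prices, with tips for first-time visitors.">
</head>
```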
These are the HTML tags <h1>, <h2>, etc., which are used in conjunction with CSS stylesheets to standardise headings on all pages of a website. The actual font size and other styling have no influence on the value of a heading tag for SEO ranking.
The words which appear between heading tags carry greater SEO weight than text in other elements, such as <p>, and h1 carries greater weight than h2, and so on. However, the semantic analysis run by search engine algorithms requires the headings and the text in their vicinity to match in theme and relevance.
Overuse of h1 tags, on the other hand, creates an incomprehensible information architecture. Besides the loss of utility, it also leads to cannibalization of keywords, making the page appear to have no strong theme. Heading tags should be used to clarify the relative weighting and importance of sections of the body text, so that the core theme is apparent to both the search engine and the human user.
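By way of illustration, a heading structure that keeps one clear theme might look like this (all text invented for the example):

```html
<h1>Alpine Ski Resorts</h1>    <!-- a single h1: the core theme of the page -->
<h2>Chalets in Gstaad</h2>     <!-- h2 sections refine, rather than repeat, the theme -->
<p>Chalet accommodation near the main ski-lifts ...</p>
<h2>Seasonal Prices</h2>
<p>High-season and low-season rates ...</p>
```

The single h1 states the theme; each h2 introduces a sub-topic whose surrounding text matches it, which is exactly the agreement the semantic analysis looks for.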
A common misinterpretation of SEO practice leads to the loading of text with unnatural repetitions of targeted keywords. This produces poor-quality information and costs the reader's interest, and with it link opportunities. Ultimately, the quality of the user experience is the metric which carries the most weight. Keywords will naturally appear from time to time in a text on a specific subject, so there is no advantage in overdoing the 'optimization' with unnatural implantations which reduce the quality and readability of the text.
Document analysis techniques and machine learning are now so advanced that phrase associations, morphemes and synonyms can be recognised and carry as much weight as targeted keywords, if not more, since they provide the all-essential semantic association the search engines seek in order to verify the relevance of content.
Images and other AV elements, like video, audio and Flash content, are where the greatest difference between the human visitor and the spider occurs. These elements cannot be seen by a spider, which consequently relies on the information supplied with them.
The name of the image file, the title, the alt attribute, and the caption can all supply keyword targeting opportunities. They should all be an honest description of the content: misleading tag content is a form of 'cloaking', through which the true nature of an image or AV element is hidden from the search engine. Make sure the caption has a keyword or two, rather than 'Check this out...' or something equally uninformative.
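For instance, an image might be marked up as follows; the file name and text are invented for the example, but each element honestly describes the picture while carrying a keyword:

```html
<figure>
  <!-- File name, title, alt and caption all describe what the image shows -->
  <img src="gstaad-ski-lift.jpg"
       title="Ski-lift in Gstaad"
       alt="Skiers queuing for a ski-lift above Gstaad">
  <figcaption>The main ski-lift above Gstaad, a classic alpine resort</figcaption>
</figure>
```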
Another addition to information retrieval (IR) in the online world is semantic connectivity. Words and phrases can have various degrees of association, and this is taken into account in a search algorithm by reference to enormous databases of terms. To help search engines refine these databases, fuzzy logic is used to adapt them to human behaviour. This is quite an advanced subject, but SEO practitioners need only know that it exists, and is 'going on' somewhere in the 0.45s it takes to return the SERPs.
Association in its basic form includes synonyms and obviously related words. For a search for 'alpine resort', keywords such as 'Swiss resort' will rank almost as well. Beyond that, a plethora of words like 'chalet', 'Gstaad', 'ski-lift', etc., will also trigger an association relationship, even if 'alpine resort' does not appear on the page.
In order to determine what a page is about, search engines carry out an analysis of the words and groups of words they find on it. The relationships between these words and phrases are then used to draw up what they hope is an accurate map of the page.
How these words and phrases form patterns of association is known as a semantic map. This is a very advanced science. The better the semantic map matches the perceived query intent, the higher a page will be ranked.
Content © Renewable.Media. All rights reserved. Created: October 21, 2014
Website © renewable-media.com | Designed by: Andrew Bone