Understanding WordPress - Part 31
8 Search Engine Optimization

8.1.1 SEO Strengths and Weaknesses

Out of the box, WordPress provides great flexibility in terms of organizing and managing your blog's content. Much of this flexibility comes by way of WordPress' category and tag architecture. Each and every post created on your blog may be assigned to any number of both categories and tags. Categories are meant to classify content according to broad definitions, while tags are used to classify content more specifically. For example, a post about your favorite movies may be categorized in the "Favorite Movies" category, while being tagged for some of the movies featured in the article: "Star Wars," "The Matrix," and "Blade Runner."

Beyond this central organizing principle, WordPress brings with it many strengths and weaknesses in terms of how content is organized and made available to both users and the search engines. Let's examine some of these SEO factors before digging into the fine art of optimizing your WordPress-powered site for the search engines.

8.1.2 Strong Focus on Content

Content, as they say, is king. The Web exists because of it. Users are searching for it. Search engines are built on it. In order to succeed on the Web, your site should be focused on delivering useful content above all else. Awesomely, one of the main goals of WordPress is to make publishing content as easy as possible.
Once WordPress is set up, getting your content online happens as fast as you can create it. On the front end, there are hundreds of top-quality themes available, each focused on organizing and presenting your content with both users and search engines in mind.

8.1.3 Built-In "nofollow" Comment Links

Perhaps not as useful as originally conceived, nofollow attributes placed on commentator links have long been thought of as an effective method of improving the SEO quality of WordPress-powered sites. For those of you who may be unfamiliar with the whole "nofollow" thing, for now suffice it to say that nofollow attributes are placed on links to prevent search engines from following those links to their targets. Originally, this was intended to serve as a way to conserve valuable page rank, but after it was revealed that this method no longer works, nofollow commentator links may be a moot point. We'll discuss this more in-depth later on in the chapter.

WordPress + nofollow: Check out Chapter 7.7.3 for more information on nofollow, WordPress, and the search engines.

8.1.4 Duplicate Content Issues

While the organizational strengths of WordPress are great for managing content, they also come with a price: duplicate content. Duplicate content is essentially identical content appearing in more than one place on the Web. Search engines such as Google are reported to penalize pages or sites associated with too much duplicate content. Returning to our movie example for a moment, our WordPress-powered site may suffer in the search rankings because identical copies of our movie article are appearing at each of the following URLs:

• original article -> http://example.com/blog/my-favorite-movies/
• category view -> http://example.com/blog/category/favorite-movies/
• star-wars tag view -> http://example.com/blog/tag/star-wars/
• the-matrix tag view -> http://example.com/blog/tag/the-matrix/
• blade-runner tag view -> http://example.com/blog/tag/blade-runner/
Yikes! And if that weren't bad enough, we also see the exact same post content appearing at these URLs:

• daily archive view -> http://example.com/blog/2009/02/02/
• monthly archive view -> http://example.com/blog/2009/02/
• yearly archive view -> http://example.com/blog/2009/
• author archive view -> http://example.com/blog/author/your-name/

Depending on your particular WordPress theme, this situation could be even worse. By default, all of your posts are available in identical form at each of the previous types of URLs. Definitely not good from a search-engine point of view. Especially if you are the type of blogger to make heavy use of tags, the number of duplicated posts could be staggering.

8.2.1 Controlling Duplicate Content

Fortunately, WordPress' poor handling of duplicate content is easily fixed. In fact, there are several ways of doing so. In a nutshell, we have plenty of tools and techniques at our disposal for winning the war on duplicate content:

• meta nofollow tags
• meta noindex tags
• nofollow attributes
• robots directives
• canonical meta tags
• excerpts for posts

So what does each of these sculpting tools accomplish, and how do they help us eliminate duplicate content? Let's take a look at each of them.
8.2.2 Meta noindex and nofollow Tags

Meta noindex and nofollow tags are actually meta elements located in the <head> section of your WordPress pages. For example, in your blog's "header.php" file, you may find a "robots" meta tag telling search engines – specifically, Google, MSN/Bing, and any other compliant search engine – that the entire page should be indexed, followed, and archived. This is great for single post pages (i.e., the actual "My Favorite Movies" article posted in our example), but we can use different parameters within these elements to tell the search engines not to index, follow, or archive our web pages.

Ideally, most bloggers want their main article to appear in the search results. The duplicate content appearing on the other types of pages may be controlled with a "noindex" set of meta tags. Here, we are telling the search engines not to include the page in the search-engine results, while at the same time, we are telling them that it is okay to crawl the page and follow the links included on the page. This prevents the page from appearing as duplicate content while allowing link equity to be distributed throughout the linked pages. Incidentally, we may also tell the search engines to neither index nor follow anything on the page. All three variants are sketched below.

Love Juice: Anyone dipping into the murky waters of SEO will inevitably discover that there are many ways to refer to the SEO value of web pages. PR, page rank, link equity, link juice, page juice, link love, love juice, rank juice, and just about any other combination of these terms is known to refer to the same thing: the success of a web page in the search engines.
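As a minimal sketch, assuming the standard "robots" meta element syntax (exact token support varies by search engine), the three variants described above look something like this:

<!-- index, follow, and archive the page (typical for the home page and single posts) -->
<meta name="robots" content="index,follow,archive" />

<!-- keep the page out of the index, but still crawl it and follow its links -->
<meta name="robots" content="noindex,follow" />

<!-- neither index the page nor follow any of its links -->
<meta name="robots" content="noindex,nofollow" />

The "noindex,follow" variant is the one aimed at duplicate-prone archive, category, and tag views.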
So, given these meta tags, what is the best way to use this method to control duplicate content on your WordPress-powered site? We're glad you asked. By using a strategic set of conditional tags in the "header.php" file for your theme, it is possible to address search-engine behavior for virtually all types of pages, thereby enabling you to fine-tune the indexing and crawling of your site's content. To see how this is done, consider the conditional-tag logic sketched below.

Tell Search Engines not to Index a Specific Post

In this section we see how to disable search-engine indexing for different categories, archives, pages, and other page views, but what if we want to prevent indexing of only one specific post? There are several SEO plugins that enable this functionality, but you don't really need one to do it. All you need to do is get the ID of the post for which you would like to disable indexing. Then, open your theme's header.php file and place a snippet along the lines of the second sketch below within the <head> section.
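The logic boils down to choosing a robots meta tag based on WordPress conditional tags. A minimal sketch of that idea, using is_home() and is_single() together with the meta tags shown earlier (the exact structure is an assumption, not the book's verbatim code), might look like this in header.php:

<?php
// allow indexing for the home page and single posts; noindex every other view
// (to also index tag and category archives, extend this check with
// "|| is_tag() || is_category()")
if (is_home() || is_single()) : ?>
    <meta name="robots" content="index,follow" />
<?php else : ?>
    <meta name="robots" content="noindex,follow" />
<?php endif; ?>

And for the single-post exclusion described in the sidebar note, a sketch along these lines (the post ID 99 is purely a placeholder):

<?php
// replace 99 with the ID of the post you want to keep out of the index
if (is_single(99)) {
    echo '<meta name="robots" content="noindex,follow" />' . "\n";
}
?>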
The conditional PHP tags used in this example effectively are saying: "If the current page is the home page or the single post page, then allow the search engines to both index and follow the content of the page; otherwise, since the page is neither the home page nor the single post page, it is probably a tag page, category page, or other archive page and thus serves as duplicate content; therefore tell the search engines to follow all links but not index the content."

Of course, it isn't always a bad thing to have some peripheral pages indexed in the search engines. In addition to having their home page and single pages indexed (as in our example above), many people prefer to have either tag pages or category pages (or both!) indexed in the search engines as well. In reality, the types of pages that you want indexed are completely up to you and your personal SEO strategy.

Prevent Duplicate Content Caused by Paginated Comments

Since WordPress 2.7, comments may be paginated, such that "x" number of comments appear on each page. While this is a step in the right direction, there may be a duplicate-content issue resulting from the fact that your post content will appear on every page of your paginated comments. To resolve this issue, place the following code into your functions.php file:

// prevent duplicate content for paginated comments
function noDuplicateContentforComments() {
    global $cpage, $post;
    if ($cpage > 1) {
        // output a canonical link pointing to the original post permalink
        echo "\n" . '<link rel="canonical" href="' . get_permalink($post->ID) . '" />' . "\n";
    }
}
add_action('wp_head', 'noDuplicateContentforComments');

Further Information: For more information and techniques on paged comments and duplicate content, check out Chapter 7.3.3.

This code will generate canonical links for all of your paginated comments. The search engines will then use this information to ensure that the original post permalink is attributed as the actual article.
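For instance, on the second page of comments for our example article (with pretty permalinks, a URL such as http://example.com/blog/my-favorite-movies/comment-page-2/, shown here purely for illustration), the snippet above would output something like this in the page head:

<link rel="canonical" href="http://example.com/blog/my-favorite-movies/" />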
So, if you would like to include both tag and category pages in the search results, you would simply extend the first conditional check of our previous example to also allow is_tag() and is_category().

Exclude Admin Pages from Search Engines

Add more types of pages to your meta-tag strategy by using any of the following template tags:

• is_home()
• is_page()
• is_admin()
• is_author()
• is_search()
• is_404()
• is_paged()

• is_date(): To target any date-based archive page (i.e., a monthly, yearly, daily, or time-based archive) that is being displayed, use this tag.

• is_archive(): Remember, there are more than just date-based archives. Other types of archive pages include sequential displays of category, tag, and author pages. is_archive() will target all of these page types.

And of course there are many more of these conditional tags available; see http://digwp.com/u/3 for more information.

8.2.3 Nofollow Attributes

Another useful tool in the fight against duplicate WordPress content is the controversial "nofollow" attribute. The nofollow attribute is placed directly into hyperlinks; a sketch of such a link appears below. Links containing the nofollow attribute will not be "followed" by Google, but may still be indexed in the search results if linked to from another source. Because such links are not followed, use of the nofollow attribute is an effective tool in the reduction and prevention of duplicate content.

For an example of how nofollow can be used to help eliminate duplicate content, let's look at a typical author archive. In the author-archive page view, you will find exact replicas of your original posts (unless you are using excerpts). This duplicate content is highly avoidable by simply "nofollowing" any links in your theme that point to the author-archive page view. Here is how such a nofollow link would appear in your theme files (second sketch below); this link will not be followed to the author archive.
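As a minimal sketch (the URLs and anchor text are placeholders), a generic nofollow hyperlink and a nofollowed author-archive link might look like this:

<!-- a generic "nofollow" hyperlink -->
<a href="http://example.com/" rel="nofollow">This is a "nofollow" hyperlink</a>

<!-- a nofollowed link to the author archive, as hard-coded in a theme file -->
<a href="http://example.com/blog/author/your-name/" rel="nofollow">View all posts by this author</a>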
Exclude Specific Pages from Search Engines

Ever wanted to keep a few specific pages out of the search engines? Here's how to do it using WordPress' excellent conditional tag functionality. Just place your choice of these snippets into the <head> section of your header.php file and all compliant search engines (e.g., Google, MSN/Bing, Yahoo!, et al.) will avoid the specified page(s) like the plague. This menu of snippets provides many specific-case scenarios that may be easily modified to suit your needs:

• Exclude a specific post
• Exclude a specific page
• Exclude a specific category
• Exclude a specific tag
• Exclude multiple tags
• Exclude posts tagged with certain tag(s)

A few of these scenarios are sketched after this list.
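A minimal sketch of a few of these scenarios, assuming the noindex,follow meta tag from earlier and standard WordPress conditional tags (the post ID, page slug, and category/tag slugs below are placeholders borrowed from the movie example):

<?php
// exclude a specific post (ID 99) and a specific page (slug "about")
if (is_single(99) || is_page('about')) {
    echo '<meta name="robots" content="noindex,follow" />';
}
?>

<?php
// exclude a specific category, a specific tag, and multiple tags
if (is_category('favorite-movies') || is_tag('star-wars') || is_tag(array('the-matrix', 'blade-runner'))) {
    echo '<meta name="robots" content="noindex,follow" />';
}
?>

<?php
// exclude single posts tagged with certain tag(s)
if (is_single() && has_tag(array('star-wars', 'blade-runner'))) {
    echo '<meta name="robots" content="noindex,follow" />';
}
?>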
Granted, using nofollow links to control duplicate content is not 100% foolproof. If the author-archive URL is linked to from any "followed" links, that page still may be indexed in the search engines. But for pages such as the author archives that probably aren't linked to from anywhere else, nofollow may help prevent a potential duplicate-content issue.

8.2.4 Robots.txt Directives

Another useful and often overlooked method of controlling duplicate content involves the implementation of a robots.txt file for your site. Robots.txt files are plain text files generally placed within the root directory of your domain:

http://domain.tld/robots.txt

Robots.txt files contain individual lines of well-established "robots" directives that serve to control the crawling and indexing of various directories and pages. Search engines such as Google and MSN that "obey" robots directives periodically read the robots.txt file before crawling your site. During the subsequent crawl of your site, any URLs forbidden in the robots.txt file will not be crawled or indexed.

Keep in mind, however, that pages prohibited via robots directives continue to consume page rank. So, duplicate-content pages removed via robots.txt may still be devaluing your key pages by accepting any link equity that is passed via incoming links. Even so, with other measures in place, taking advantage of robots.txt directives is an excellent way to provide another layer of protection against duplicate content and unwanted indexing by the search engines.

Yahoo! Disobeys: Sadly, when it comes to search engines that comply with robots.txt directives, Yahoo! falls far short: http://digwp.com/u/218

Not Foolproof: Pages blocked by robots.txt directives may still appear within the index if linked to by "trusted, third-party sources." http://digwp.com/u/219

Let's look at an example of how to make a useful robots.txt file. First, consider the default directory structure of a WordPress installation. For a typical installation located in the root directory, there is no reason for search engines to index URLs containing any of the core WordPress files. So we begin our robots.txt file by writing:
Disallow: /wp-*
Disallow: *.php

These two lines tell compliant search engines to ignore any URL beginning with "http://domain.tld/wp-" or ending with ".php". Thus, all of our core WordPress files are restricted and will not be crawled by compliant search engines.

Now, consider some of the types of WordPress-generated URLs that we don't want the search engines to follow or index:

http://domain.tld/feed/ - your site's main feed
http://domain.tld/comments/feed/ - your site's comments feed
http://domain.tld/other/feeds/ - every other type of feed
http://domain.tld/post/trackback/ - every trackback URL on your site
http://domain.tld/2008/08/08/ - archive views for every day
http://domain.tld/2008/08/ - archive views for every month
http://domain.tld/2008/ - archive views for every year

Of course, there are other types of pages which we may also wish to exclude from the search engines, such as category and tag archives, but you get the idea. To prohibit robots-compliant search engines from accessing and indexing the miscellaneous pages listed above, we add these directives to our robots.txt file:

Disallow: */feed*
Disallow: */trackback*
Disallow: /20*
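Putting these pieces together, a minimal sketch of the resulting robots.txt file might look like this (the opening User-agent line is an assumption about how the rules are grouped, and the patterns should be adapted to your own permalink structure):

# minimal sketch of a robots.txt for a WordPress site in the root directory
User-agent: *
# core WordPress files and directories
Disallow: /wp-*
# any URL ending in .php
Disallow: *.php
# feeds, trackbacks, and date-based archive views
Disallow: */feed*
Disallow: */trackback*
Disallow: /20*

Note that wildcard patterns such as "*" are extensions honored by major engines like Google and Bing rather than part of the original robots.txt standard, so behavior may vary across crawlers.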