Internal Linking: Link Juice Sculpting
SMX East, October 7, 2008
The Syntax
• NoFollow Attribute – <a href="/someurl.html" rel="nofollow">
• NoFollow Metatag – <meta name="robots" content="nofollow">
• NoIndex Metatag – <meta name="robots" content="noindex">
• Combined NoFollow and NoIndex – <meta name="robots" content="noindex, nofollow">
Robots.txt Syntax
• User-agent: *
• Disallow: /
– Prevents all crawling of your site
• User-agent: *
• Disallow: /test/
• Disallow: /cgi-bin/
– Prevents crawling of your test and cgi-bin folders
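The rules above can be checked with Python's standard `urllib.robotparser` — a minimal sketch (the example paths are assumptions, not from the deck):

```python
# Sketch: verify how the robots.txt rules above behave for a crawler.
from urllib import robotparser

rules = """User-agent: *
Disallow: /test/
Disallow: /cgi-bin/
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("*", "/test/page.html"))   # blocked folder -> not fetchable
print(rp.can_fetch("*", "/products/a.html"))  # everything else stays crawlable
```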
NoIndex Illustrated
[Diagram: the SE robot crawls the page and follows Links 1, 2, and 3 (YES), passing link juice, but the page itself is kept out of the SE index.]
NoFollow Metatag Illustrated
[Diagram: the page itself is indexed (YES), but the SE robot follows none of Links 1, 2, or 3 (NO), so no juice passes from the page.]
NoFollow Attribute Illustrated
[Diagram: the page is indexed (YES); Link 1, which carries the nofollow attribute, is not followed (NO), while Links 2 and 3 are followed (YES).]
Robots.txt Illustrated
[Diagram: the SE robot never fetches the page, so Links 1, 2, and 3 are not followed (NO); the URL itself can still end up in the SE index (YES) if other sites link to it.]
Duplicate Content Scenario
• PR 7 site – aggregates press release content
• About 60% of the press releases show up on other sites
• Value add of the site was to aggregate and organize the press releases
• Google was having none of it
– Hit by an algorithmic penalty
Duplicate Content Solution
• NoIndex the pages
– Let the crawler find and remove the pages
• Can speed this up using the URL Removal Tool in WMT
• Once removed, NoFollow links to the pages
• DON'T use Robots.txt
– What if someone else links to them?
– You want those pages to still be able to pass juice
Content Syndication
• Syndicate tens of thousands of pages
• Exact copies of content for a major media site
• To a PageRank 8 site
• Sounds like major trouble, no?
• SOLUTION:
– NoIndex the syndicated pages
– Prevents the dupe content problem
– Still passes link juice
The Site that took the Content?
Link Building
• E-commerce site
• Hard to get links for it
• SOLUTION:
– Build a rich content tree
– Get links to that
– Incorporate into the content tree links to key parts of the site
– NoFollow all the other links
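The sculpting rule in the solution above can be sketched as a simple link renderer — follow only the key commerce pages, nofollow everything else (the page paths and function name are hypothetical, not from the deck):

```python
# Sketch: from the content tree, pass juice only to key site sections.
KEY_PAGES = {"/category/widgets", "/category/gadgets"}  # hypothetical targets

def render_link(href: str, text: str) -> str:
    """Emit a followed link for key pages, a nofollowed link otherwise."""
    if href in KEY_PAGES:
        return f'<a href="{href}">{text}</a>'               # passes juice
    return f'<a href="{href}" rel="nofollow">{text}</a>'    # sculpted out

print(render_link("/category/widgets", "Widgets"))
print(render_link("/about-us.asp", "About Us"))
```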
Link Building for E-Commerce
[Diagram: inbound links point at the quality content tree, which links on to the key e-commerce pages; links to other site pages are nofollowed.]
Very Basic Sculpting
What They Did
This Example From Citysearch
Http and Https Dupe Content
• E-commerce site
• User lands on http://www.yourdomain.com
• Puts a product in the shopping cart
• Goes to check out
• Gets sent to https://www.yourdomain.com/shoppingcart.asp
• That page uses relative links (instead of absolute)
Http and Https Dupe Content - 2
• Example link:
– <a href="/about-us.asp"> – Instead of:
– <a href="http://www.yourdomain.com/about-us.asp">
• Click About Us on the home page:
– <a href="http://www.yourdomain.com/about-us.asp">
• Click About Us on the shopping cart page:
– <a href="https://www.yourdomain.com/about-us.asp">
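The resolution behavior described above can be reproduced with Python's standard `urllib.parse.urljoin`: the same relative href yields two different URLs depending on the scheme of the page it sits on.

```python
# Sketch: one relative link, two resolved URLs -> duplicate content.
from urllib.parse import urljoin

http_page = "http://www.yourdomain.com/index.asp"
https_page = "https://www.yourdomain.com/shoppingcart.asp"

print(urljoin(http_page, "/about-us.asp"))   # http://www.yourdomain.com/about-us.asp
print(urljoin(https_page, "/about-us.asp"))  # https://www.yourdomain.com/about-us.asp
```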
Http and Https Solution
• Serve a separate robots.txt on the https host (https://www.yourdomain.com/robots.txt) that Disallows crawling of the https pages
• NoFollow all links to the https pages
• Make sure you use https only where you really need it
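The first bullet can be sketched as a tiny per-scheme robots.txt handler — the function name and the http-side rules are assumptions for illustration, not from the deck:

```python
# Sketch: return a different robots.txt depending on the request scheme.
def robots_txt_for_scheme(scheme: str) -> str:
    if scheme == "https":
        # Block the secure mirror entirely so it can't be crawled as a dupe
        return "User-agent: *\nDisallow: /\n"
    # Normal rules on the http site (example folders only)
    return "User-agent: *\nDisallow: /cgi-bin/\n"

print(robots_txt_for_scheme("https"))
```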
Thank You!
Eric Enge
President
Stone Temple Consulting
[email protected]