
If we have a website, we definitely need it to be a friend of search engines. There are several ways to attract visitors, but for searchers to discover our website, search engines are where our content must prove itself. If we just have static HTML content, there is not much of a problem in promoting it. But in today's world of content-managed websites and eCommerce portals, we need to look further and implement a few more techniques to make the site more visible to robots. In this article we will discuss how to develop an SEO-friendly website where the content is driven from a database through a Content Management System built with ASP.NET. We will learn to build a simple CMS-driven site with clean, no-nonsense URLs of the kind search engines welcome.

Why ASP.NET?

ASP.NET is a robust server-side framework that offers several advantages and helps us create search-engine-friendly, content-managed websites. The following are the features / components I am going to use to create a scalable, SEO-friendly website.

  • Master Pages
  • HTTP Modules
  • HTTP Handlers

Master Pages

Master Pages are a feature that lets us keep the site layout consistent across the site, so any change to the layout can be made easily at a single point. A master page allows us to separate the application layer from the presentation layer. It operates much like Smarty templates in PHP.

Master Pages allow us to create a more robust Content Management System, which in turn lets us build a more search-engine-friendly website easily through templates.

How to Work with Master Pages?

  • Create a website from the project menu.
  • Add a Master Page by selecting Add New Item (by this point it is good to have your HTML template ready).

If you plan to have a vastly different homepage and inner page, then you need to have separate master pages accordingly. You can also have nested master pages.

After adding the master page to your project, you need to decide which parts of the website will be static across pages. In most websites the top portion carries the navigation and is common throughout the site; in this example we assume the same kind of layout.
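As a minimal sketch, a master page for this kind of layout might look like the following (the placeholder IDs `Left` and `Middle` match the content page shown later in this article; the file names and class names here are illustrative assumptions):

```aspx
<%@ Master Language="C#" AutoEventWireup="true" CodeFile="InnerPage.master.cs" Inherits="InnerPage" %>
<html>
<head runat="server">
    <title>Untitled Page</title>
    <!-- Meta tags live in the master page so content pages can set them -->
    <meta name="keywords" content="" id="MetaKeyword" runat="server" />
</head>
<body>
    <div class="header"><!-- common navigation, shared by every page --></div>
    <div class="left">
        <asp:ContentPlaceHolder ID="Left" runat="server" />
    </div>
    <div class="main">
        <asp:ContentPlaceHolder ID="Middle" runat="server" />
    </div>
</body>
</html>
```

Each `ContentPlaceHolder` marks a region that content pages are allowed to fill; everything else stays fixed across the site.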

[Figure: master page layout used in this example]

The picture above shows the layout chosen for this example: two content placeholders, one for the left column and one for the main page area.

Once you have created the master page, the next step is to add the content pages. (Note that master pages are still templates; the content pages are what is actually served to users. While rendering a page, the ASP.NET engine combines the master page and the content page and presents the result to the end user.)

Since we are building a CMS-based website, we will have just one content page in this example, which you can use across the site. Below is sample code for the content page.

<%@ Page Language="C#" MasterPageFile="~/InnerPage.master" AutoEventWireup="true"
    CodeFile="Default2.aspx.cs" Inherits="Default2" Title="Untitled Page" %>
<asp:Content ID="ContentLeft" ContentPlaceHolderID="Left" Runat="Server">
</asp:Content>
<asp:Content ID="ContentMiddle" ContentPlaceHolderID="Middle" Runat="Server">
    <div class="breadCrumb">
        <asp:SiteMapPath ID="BreadCrumb" runat="server"></asp:SiteMapPath>
    </div>
    <div class="Content" runat="server"></div>
    <asp:SiteMapDataSource ID="CrumbSource" runat="server" StartingNodeUrl="~/default.aspx" />
</asp:Content>
Notice that the content pages contain just the Content elements and the controls relevant to that page.

Setting up Page Title, Meta Tags Dynamically in ASP.NET Content Pages

Make sure the page title, META keywords and description tags are in place so the page is more visible to search engines. There are two ways to do this: one is to place everything in the master page with default values; the other is to add all these meta tags in every page.

To update the meta tags from the content pages, we need access to the master page's elements, which we gain with the help of the MasterType directive.

<%@ MasterType VirtualPath="~/MainLayOut.master" %>

This way, all master page elements can be accessed as Master.ElementName.

What we have done is expose a property for each meta element, so we can simply set its value.

private HtmlMeta _MetaControl; // field on the master page's code-behind

public virtual String MetaKeyword {
    get {
        _MetaControl = (HtmlMeta)this.Page.Header.FindControl("MetaKeyword");
        return (_MetaControl.Content);
    }
    set {
        _MetaControl = (HtmlMeta)this.Page.Header.FindControl("MetaKeyword");
        _MetaControl.Content = value;
    }
}

The above property gets / sets the value of the keywords meta tag. For it to work, the meta tag must have an ID matching the one passed to FindControl and be marked runat="server".

<meta name="keywords" content="Keyword1, Keyword3" id="MetaKeyword" runat="server" />

Similarly, we can add properties for the meta description and any other tags desired.

In the content pages, we can then set the title and the meta tags based on the content requested:

protected void Page_Load(object sender, EventArgs e)
{
    // Master references the master page of this page
    Master.MetaKeyword = "Keyword1, Keyword2, Keyword3";
    Master.MetaDescription = "";
    this.Header.Title = "Welcome to ASP.NET CMS Websites";
}


The Virtual Pages / SEO Friendly URL’s

Consider a website whose Services menu has a structure like the one below:

  • Website Development
  • Search Engine Optimization
  • Flash Development

Normally, when the user clicks on the Services page we redirect him to services.aspx, and when he clicks on Website Development we pass a query string such as services.aspx?WebsiteDevelopment. But search engines hesitate to respect such messy URLs. So instead we will create URLs that are friendlier to search engines, and the CMS panel should provide an option to assign a URL to each content page.

We will now map each virtual URL to the actual URL in a database or in an XML file. Below is an example of an XML file that is updated as pages are added to the site.

<SEOURL>
  <URL name="services.aspx?WebsiteDevelopment" VUrl="offshore_offshore-cms-development.aspx"></URL>
  <URL name="services.aspx?SEO" VUrl="offshore_SEO.aspx"></URL>
</SEOURL>


If you have a considerably large number of pages, it is better to store the mappings in a database, just to gain more performance. You can also map and match URLs using regular expressions.
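As a minimal sketch of regex-based mapping (the rule, class and method names here are illustrative assumptions, not part of the original code), a pattern can translate a whole family of clean URLs back to their real query-string form without listing each one in the XML file:

```csharp
using System.Text.RegularExpressions;

public static class RegexUrlMapper
{
    // Hypothetical rule: "/services_<slug>.aspx" maps back to "services.aspx?<slug>"
    // with the dashes stripped from the slug.
    private static readonly Regex ServicesRule =
        new Regex(@"^/services_(?<slug>[A-Za-z0-9-]+)\.aspx$", RegexOptions.IgnoreCase);

    public static string MapToRealUrl(string virtualPath)
    {
        Match m = ServicesRule.Match(virtualPath);
        if (m.Success)
        {
            string slug = m.Groups["slug"].Value.Replace("-", "");
            return "~/services.aspx?" + slug;
        }
        return null; // no rule matched; fall back to the XML / database lookup
    }
}
```

One rule like this can replace dozens of individual XML entries, at the cost of requiring a predictable URL scheme.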

The next step is to write an HttpModule to manage all these virtual URLs and deliver the content from the real URL.

  1. Create a separate project of type Class Library
  2. Add a class that implements IHttpModule

using System;
using System.Web;

namespace URLHooking
{
    public class PageSource : IHttpModule
    {
        public PageSource()
        {
        }

        public void Init(HttpApplication application)
        {
            // IHttpModule's Init, where we attach a handler to the
            // PostResolveRequestCache event.
            application.PostResolveRequestCache +=
                new EventHandler(this.Application_OnAfterProcess);
        }

        public void Dispose()
        {
            // Nothing to clean up.
        }

        private void Application_OnAfterProcess(Object source, EventArgs e)
        {
            HttpApplication application = (HttpApplication)source;
            HttpContext context = application.Context;
            if (!System.IO.File.Exists(application.Request.PhysicalPath))
            {
                // Parse the XML file and get the real URL
                string tmpRealURL = GetRealPathFromXML(application.Request.Path);
                context.RewritePath(tmpRealURL);
            }
        }
    }
}


Now, compile this project into a DLL.
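The article leaves GetRealPathFromXML to the reader. As a minimal sketch (assuming the mapping XML shown earlier is saved as SEOUrl.xml in the application root, a file name chosen here purely for illustration), the method on the module class might look like:

```csharp
using System.Xml;

// Hypothetical helper: look up the requested virtual URL (VUrl) in the
// mapping file and return the real URL (the "name" attribute).
private string GetRealPathFromXML(string virtualPath)
{
    XmlDocument doc = new XmlDocument();
    doc.Load(HttpContext.Current.Server.MapPath("~/SEOUrl.xml"));

    string fileName = System.IO.Path.GetFileName(virtualPath);
    foreach (XmlNode node in doc.SelectNodes("/SEOURL/URL"))
    {
        if (string.Equals(node.Attributes["VUrl"].Value, fileName,
                          StringComparison.OrdinalIgnoreCase))
        {
            return "~/" + node.Attributes["name"].Value; // the real URL
        }
    }
    return virtualPath; // no mapping found; leave the request untouched
}
```

Loading and scanning the XML on every request is the simplest approach; caching the parsed mappings, or moving them to a database as suggested above, would perform better on a large site.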

The next step is to configure this DLL as an HttpModule in our website project.

  1. Open the Web.Config file in the website project
  2. Under configuration / system.web, place this configuration:
<httpModules>
    <add name="PageSource" type="URLHooking.PageSource, URLHooking"/>
    <!-- name="Class Name" type="Namespace.ClassName, AssemblyName" -->
</httpModules>

After adding this to the configuration file, compile the project.


When a user requests a page, ASP.NET first checks whether that page exists physically. If it doesn't, the module checks the XML file to find the actual URL; once found, the RewritePath method transfers execution to the page it points to. The important aspect is that the browser / client requesting the page never sees the redirection; it is all processed on the server.
