Tagged: Sitecore

Sitecore Media Library -> Cloud

By default, all of Sitecore’s images are stored in the database and retrieved on the fly when an image is requested via the media library. There are good reasons to improve on this, and much has been written on the web about how to move Sitecore’s images from the database to the cloud. The motivations are specific – reduce the size of the content database, reduce page load times and reduce database hits. I’ve looked through a few options for porting images to Windows Azure Blob Storage (WABS)*, and wanted to outline the approaches that people have taken and how we ended up solving this problem.

*Other providers such as Amazon S3 are also available

The options for achieving this I have read about so far are broadly:

  1. Swap out the SqlServerDataProvider for your own subclassed provider that pumps the blob data out to Azure, using a GUID for its name.
  2. Reroute requests for the media library to an Azure storage blob using ARR or something similar. There is also an example using Sitecore’s own configuration to reroute.

When I came to investigate these approaches, I found slight problems with both of them, and hence developed a slightly different approach, which is outlined below.

Option 1 – Swapping out the data provider

The first approach is broadly elegant and offers some significant benefits, but unfortunately also has some drawbacks. The benefits are that, being low-level, it should preserve Sitecore’s image resizing capabilities via the pipeline steps, and it allows us to remove the images entirely from the database, thereby minimizing the size of the database and making for easier portability. It would also allow logic to be added that only pulls images from Azure if they exist, falling back to the database in the event that no image has been uploaded, making it more robust.

The main drawback with this approach is that because we are hooking into quite a low-level part of Sitecore, the SqlServerDataProvider, the functions to read / write the blob only get limited information about it – a GUID which identifies that blob in the SQL table, and the data itself. This leaves a problem whereby once all your images are published to the cloud, they don’t have the same item hierarchy (folder path) that they had in Sitecore, and worse, they don’t necessarily have the correct extension. So whilst this solution works acceptably, it’s not easy to see what has happened if an image is missing, and eventually you will have one container with potentially thousands of unstructured images in it, all named by GUIDs.

I spent some time trying to amend this solution to save / retrieve the images via a path rather than a GUID, and there are some options here. Remember that the GUID you have is not the ID of the media item – it’s actually the ID of the blob in the database – so it’s not so easy to get from it to the item in order to look up the path.

The first option is to look up the path from the GUID via a database hit. There’s some SQL below which should do it, but I don’t like this solution: you’re getting rid of one database hit and introducing another, and it feels “dirty”. There are further options – you could create and maintain some sort of lookup and cache these hits – but the whole thing starts to feel pretty messy at this point.

SELECT TOP 100
    *
FROM
    Items I
    JOIN SharedFields S ON S.ItemId = I.ID AND S.FieldId = '{40E50ED9-BA07-4702-992E-A912738D32DC}'
    LEFT JOIN Blobs B ON S.Value = B.BlobId
WHERE
    S.Value = '{B018B71D-681E-4771-88E6-EFF99994F979}'
ORDER BY
    I.Created DESC

The second option I looked into was to try to hook into the process at a higher level, where the media item GUID is still available to use. Looking at the call stack for the ‘SetBlobStream’ function, we see the below.

Sitecore Callstack

To get the full path for an item when calling SetStream, we would need to get into this call stack a bit higher – ideally at the MediaData / Media class. There is some config that looks like it might wire this up in Sitecore:

    <mediaLibrary>
      <!-- MEDIA PROVIDER
         The media provider used to generate URLs, create media items, control media caching, parse media requests, and other
         media related functionality.      
      -->
      <mediaProvider type="Sitecore.Resources.Media.MediaProvider, Sitecore.Kernel" />
      <!-- MEDIA REQUEST PREFIXES 
           Allows you to configure additional media prefixes (in addition to the prefix defined by the Media.MediaLinkPrefix setting)
           The prefixes are used by Sitecore to recognize media URLs. 
           Notice: For each custom media prefix, you must also add a corresponding entry to the <customHandlers> section 
      -->
      <mediaPrefixes>
        <!-- Example
        <prefix value="-/media"/>
        -->
      </mediaPrefixes>
      <requestParser type="Sitecore.Resources.Media.MediaRequest, Sitecore.Kernel" />
      <mediaTypes>
        <mediaType name="Any" extensions="*">
          <mimeType>application/octet-stream</mimeType>
          <forceDownload>true</forceDownload>
          <sharedTemplate>system/media/unversioned/file</sharedTemplate>
          <versionedTemplate>system/media/versioned/file</versionedTemplate>
          <metaDataFormatter type="Sitecore.Resources.Media.MediaMetaDataFormatter" />
          <mediaValidator type="Sitecore.Resources.Media.MediaValidator" />
          <thumbnails>
            <generator type="Sitecore.Resources.Media.MediaThumbnailGenerator, Sitecore.Kernel">
              <extension>png</extension>
              <filePath>/sitecore/shell/themes/Standard/Applications/32x32/Document.png</filePath>
            </generator>
            <width>150</width>
            <height>150</height>
            <backgroundColor>#FFFFFF</backgroundColor>
          </thumbnails>
          <prototypes>
            <media type="Sitecore.Resources.Media.Media, Sitecore.Kernel" />
            <mediaData type="Sitecore.Resources.Media.MediaData, Sitecore.Kernel" />
          </prototypes>
        </mediaType>

However, I found that when I changed the type that mediaData should link to, the changes had no impact. I could see my class being instantiated at points during the rendering of an image, but unfortunately it wasn’t being instantiated from the Media class, which is what I needed. Looking at the Media class, it has an injected reference to the MediaData class, but I can’t see where this can be influenced in config, and I suspect it can’t easily be done. At this point I decided that this was probably a dead end, and that there were easier ways to get images working in cloud storage, so I moved on to looking at other options.

Option 2 – Rerouting using Active Rewrite Rules

An alternative to the above is to use some mechanism to push images to the cloud, and then reroute from the browser requests for media library URLs to the cloud, therefore bypassing Sitecore’s own media handler.

In order to achieve the first part of this solution and push images to the cloud, it made the most sense to follow the method outlined here – a publishItem pipeline step. This pipeline step is quite simple: all it does is check whether a published item is a media item, and if so push it up to the cloud when the item has been updated or added. A code sample from our solution is below – the IImageStore interface / implementation are not provided (a sketch follows the code), but hopefully it’s still clear what this is trying to do.

public class PublishItemProcessor : Sitecore.Publishing.Pipelines.PublishItem.PublishItemProcessor
{
    private readonly IImageStore _imageStore;

    public PublishItemProcessor() : this(IoC.Unity.Resolve<IImageStore>())
    {
    }

    public PublishItemProcessor(IImageStore imageStore)
    {
        if (imageStore == null) throw new ArgumentNullException("imageStore");
        _imageStore = imageStore;
    }

    public override void Process(PublishItemContext context)
    {
        var target = context.PublishOptions.TargetDatabase.GetItem(context.ItemId, context.PublishOptions.Language);
        if (target == null || !target.Paths.IsMediaItem) return;

        var mediaItem = new MediaItem(target);
        switch (context.Action)
        {
            case PublishAction.PublishVersion:
            case PublishAction.PublishSharedFields:
                _imageStore.Add(mediaItem);
                break;

            case PublishAction.DeleteTargetItem:
                _imageStore.Remove(mediaItem);
                break;
        }
    }
}
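
For completeness, a minimal sketch of the IImageStore abstraction this processor depends on might look like the following (hypothetical – the real implementation wraps the Azure Storage client and decides how media items map to blob names):

// Hypothetical abstraction over cloud blob storage for media items.
// A concrete implementation would wrap the Azure Storage client library.
public interface IImageStore
{
    // Uploads (or overwrites) the blob for the given media item.
    void Add(MediaItem mediaItem);

    // Removes the blob for the given media item, if it exists.
    void Remove(MediaItem mediaItem);
}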

The image store implementation here simply takes a media item and publishes it to the cloud; whether the publish is an update or an add, the media in the cloud will be overwritten. Unfortunately we found a slight idiosyncrasy here, in that the DeleteTargetItem PublishAction never seems to fire. This didn’t turn out to be a significant problem – it may be necessary to add a clean-up step at a later point that goes through the Azure Storage container and removes any orphaned items, but for now the orphaned items don’t do any harm. This publish pipeline step is configured as per the article referenced above, so I won’t repeat that configuration here.
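
Purely as an illustration of what that clean-up step could look like – assuming the Azure Storage client library, a container named "media", and blobs named after the media item path (all assumptions about our IImageStore implementation) – something along these lines could be run periodically:

using System.IO;
using System.Linq;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public class OrphanedMediaCleaner
{
    // Deletes blobs that no longer correspond to a media item in the web database.
    // Assumes blobs are named <media item path>.<extension> under a "media" container.
    public void Clean(string connectionString)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var container = account.CreateCloudBlobClient().GetContainerReference("media");
        var webDatabase = Sitecore.Configuration.Factory.GetDatabase("web");

        foreach (var blob in container.ListBlobs(null, true).OfType<CloudBlockBlob>())
        {
            // Strip the extension to get back to the media item path.
            var itemPath = "/sitecore/media library/" + Path.ChangeExtension(blob.Name, null);
            if (webDatabase.GetItem(itemPath) == null)
            {
                blob.DeleteIfExists();
            }
        }
    }
}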

The second part of the solution was to rewrite requests for http://<servername>/~/media/ to https://<azure_storage_name>/media/, thereby ensuring images are now served from the cloud rather than pulled from the database. We found we had one additional requirement – to still allow Sitecore to serve the images where those images are being re-sized by the server. This is largely a backwards compatibility concern, but again could be achieved using ARR. The rule that was applied is broadly as below:

    <rule name="CloudImages" stopProcessing="true">
      <match url="~/media(?:/(.+\.(?:jpg|jpeg|png|gif|bmp)))" />

      <!-- We still want Sitecore's image resizing functionality, so don't reroute requests where this is being invoked. -->
      <conditions logicalGrouping="MatchAll" trackAllCaptures="true">
        <add input="{QUERY_STRING}" negate="true" pattern="(?:.(h=|w=|bc=|width=|height=))" />
      </conditions>

      <action type="Redirect" redirectType="Permanent" url="https://<cloud.server>/media/{R:1}" />
    </rule>
    

This rule matches all images served from the media library with the listed extensions, and serves them from a cloud server rather than the Sitecore instance. Having configured this – voila! Images are now stored in the cloud as well as the database. This takes some load off the Sitecore database with minimal interruption and fuss, and should also improve page load times when the Sitecore database is heavily contended.

Further work:

In an ideal world, the following requirements would additionally be satisfied by this solution:

  1. Backwards compatibility: Sitecore can fall back to the database where an image has failed to upload to the cloud.
  2. Image resizing / other pipeline steps can still be integrated where necessary, without fetching these images from the database.
  3. Removing or unpublishing an image deletes the redundant image from the cloud.

These requirements may be looked at as part of a refinement to this solution at some point in the future, but for now they are not considered important enough to block us, so we will press on with this solution. Feel free to share other articles / approaches you consider effective in the comments section!

Installing the Sitecore Digital Marketing System

The Sitecore Digital Marketing System (DMS) slots in alongside the Sitecore Content Management System (CMS), offering an array of additional features for your website such as multivariate testing, profiling and personalizing the customer journey, launching campaigns and setting business goals. The combination of the CMS and DMS is referred to as the Sitecore Customer Engagement Platform (CEP).

This article highlights some of the technical considerations we encountered in getting the DMS enabled on a high traffic, high availability website, so that we could start utilising its marketing tools.

Prerequisites

First off, since Sitecore are currently binding new releases of the DMS with the CMS, we had to upgrade our version of the CMS before taking advantage of the latest DMS features.

Our existing version of the CMS was:

  • Sitecore 6.5.0 Update-3 (rev. 111230), DMS 2.0.0 (rev. 111230)

And the upgrade options we considered for the CMS were:

  • Sitecore CMS and DMS 6.6.0 rev. 130529 (6.6.0 Update-6)
  • Sitecore CMS and DMS 7.0 rev. 130424 (7.0 Initial Release)

We decided to upgrade to Sitecore 6.6 (update 6) as it gave us the DMS features we desired (such as improvements to the Executive Insight Dashboard, support for a separate reporting database, plus a number of fixes) as well as improvements to the CMS (incl. performance gains – more on that below).

We decided to hold off upgrading to Sitecore 7 for now (which offers, amongst other improvements, the use of item buckets to store vast amounts of child content without it having to display in the content tree, along with searching and indexing features) as it requires a shift to .NET Framework 4.5 and Visual Studio 2012, which we were yet to roll out due to other factors in our organisation.

Note: regarding the DMS fixes, we had previously attempted to enable DMS 2.0.0 rev. 111230 on our site – however, we encountered issues during load testing regarding database locking and long-running database queries, so took the decision not to proceed.

During the CMS upgrade we made use of the following resources:

Installation guide: http://sdn.sitecore.net/upload/sitecore6/66/installation_guide_sc66-a4.pdf

Release history (Sitecore login required): http://sdn.sitecore.net/Products/Sitecore%20V5/Sitecore%20CMS%206/ReleaseNotes/ChangeLog/Release%20History%20SC66.aspx

We deployed new Sitecore databases (core, master, web – adopting the Sitecore version number into our database naming convention) into each environment alongside our existing Sitecore databases (which were earmarked for decommissioning once the upgrade was complete). A content package was created from our existing Sitecore Editorial environment for population into these new databases (with our Sitecore editors informed of a content freeze whilst the upgrade took place). Our Sitecore web application was updated to target the new Sitecore binaries, and the various configuration changes highlighted in the release notes were applied. A fresh client install was carried out on each of our environment web servers (uninstalling the existing client and re-hardening security on the new client).

Note: we had investigated Sitecore’s ‘upgrade a previous Sitecore CMS 6 version to this release’ path (rather than implementing fresh installs of database/client) – however, with our existing client already security-hardened we were unable to use its upgrade tools.

We found a minor issue on upgrade: single-line text fields would render HTML tags as text. Sitecore were aware of the issue and provided a dll fix (‘Sitecore.Support.381846’) – however, we chose to resolve it by updating the fields to rich text instead.

During load testing, we found that our upgraded version of Sitecore was using roughly half as much CPU compared to the existing version. The performance benefits were tangible – our page download times had significantly improved.

Typical metrics from load testing show CPU level and test duration down on upgrade of Sitecore, confirming that our servers are less worked and our web pages are delivered to the customer faster:

Sitecore version   Test settings                              Test Duration   CPU level
6.5.0 Update-3     20 constant users, 5000 test iterations    18:00 min:sec   65.8%
6.6.0 Update-6     20 constant users, 5000 test iterations    12:21 min:sec   29.5%

Installation

With our CMS upgraded, we were now in a position to integrate our desired version of the DMS.

We installed the DMS Analytics database, along with a second copy which we would use for reporting (splitting the DMS into two databases allows for analysis of data using reporting tools without affecting site performance). The DMS also provides an SSIS (SQL Server Integration Services) package for replication of data between the Analytics and Reporting databases, plus a SQL script for refreshing the Reporting database (note: the refresh can alternatively be configured in the Sitecore.Analytics.config file) – resources can be found at http://sdn.sitecore.net/SDN5/Reference/Sitecore%206/DMS%20Documentation.aspx

Our Sitecore web application required the following DMS config considerations:

Web.config: add the attribute enableAnalytics="true" to the applicable <sites> entries (note: for our Editorial server we set the value to false, as we did not want to track our editors’ activity)
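
For example, against a hypothetical site definition (the site name and other attributes here are illustrative):

  • <site name="website" hostName="www.example.com" enableAnalytics="true" ... />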

Sitecore.Analytics.config: analytics sample rate could be throttled as desired (for example, setting the percentage value to 50 would mean only half of our website sessions would participate in DMS management)

  • <setting name="Analytics.Sampling.Percentage" value="100" />

Sitecore.Analytics.config: the DNS lookup feature could be disabled if required

  • <setting name="Analytics.PerformLookup" value="false" />

ConnectionStrings.config: separate connection strings were utilized for the Analytics and Reporting databases

Layouts: our site layouts adopted a tag for robot detection

  • <sc:VisitorIdentification runat="server" />

We conducted load testing of our CMS before and after integration with the DMS, utilising the sample rate config to determine the effect on our website as more and more users participated in DMS management. We also load tested the effect on the site whilst DMS database truncation and replication were executed. As with previous load testing, we monitored the logs for errors and growth, profiled the database, monitored memory and performance counters on the servers – as well as analysing the test outputs under various conditions (number of constant users, test iterations, stress test, soak test). A record count of visits, page hits, etc confirmed that the Analytics database was storing data as expected. Integrating the DMS did not have any significant effect on performance, log file size or memory usage, although we did note that CPU increased by up to a quarter.

Typical metrics from load testing show CPU level and test duration slightly up on integration with DMS, confirming that although our servers are being worked a little harder our web pages are still being delivered to the customer in a timely manner:

Sitecore version                        Test settings                               Test Duration   CPU level
6.6.0 Update-6 (without DMS)            100 constant users, 3000 test iterations    17:40 min:sec   70%
6.6.0 Update-6 (with DMS, 100% sample)  100 constant users, 3000 test iterations    18:11 min:sec   78%

Visual Studio 2010 was utilized for load testing. Several Web Performance tests were created, mimicking common customer website journeys (note these tests can be configured with variables to populate web forms and run under varying user accounts). Load tests were then created, allowing for a mix of Web Performance tests to be executed against a set number of constant users during a set number of iterations. For example, in the above typical metrics, we maintained a flow of 100 users against our site for a mix of 3000 customer journeys. To stress test we increased the number of constant users. To soak test we increased the number of iterations (so that it effectively ran all night).

Go-live strategy

Before switching on the DMS in our production environment, we put together an estimate of the record growth we would expect to see by calculating our current site traffic against the record counts we were experiencing in load testing – this allowed us to provision the required database disc space.
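
As a purely illustrative calculation (the numbers here are made up): if load testing showed roughly 5 Analytics records and 2 KB of data per visit, and the production site receives 100,000 visits per day, that suggests growth in the order of 500,000 records and roughly 200 MB per day – around 6 GB per month before indexes – which is the kind of figure used to size the database disc.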

We also opted for a soft-launch, going live with a sample rate of 10% and shifting to a full 100% rate once we were satisfied that the DMS integration had not raised any concerns (a rollback strategy was in place, utilising the config setting <setting name="Analytics.Enabled" value="false" />).

Our data replication between the Analytics and Reporting databases was also increased from daily to hourly after soft-launch, allowing for more frequent reporting.

Summary

The integration of the Sitecore Digital Marketing System (DMS) into the Sitecore Content Management System (CMS) offers a range of features for managing a customer’s experience with your website. Hopefully this article has gone some way to highlight the technical considerations in bringing the DMS on-board in a way which is thorough and sympathetic to the requirements of a high availability, high traffic environment.

Unit Testing with Sitecore

This post is going to cover how you can create highly testable code for Sitecore, in particular when using WebForms.

The State of the Art

I’m sure everybody reading this already knows Sitecore is a powerful and hugely-capable CMS. I’m also sure that everybody reading this knows it is incredibly difficult to write unit tests for code that interacts with Sitecore. Let’s quickly recap why testing is so difficult:

  1. WebForms encourages violation of the Single Responsibility Principle by mixing business and UI logic in control code-behinds
  2. Lack of interfaces throughout the Sitecore API
  3. Heavy dependence on Sitecore.Context, effectively a global variable
  4. Lack of strong typing on Items and Fields, meaning propagation of magic strings

This list can be divided into two categories: problems caused by WebForms itself (point 1) and problems caused by Sitecore (points 2-4). You can solve each category independently, but both need to be solved if you wish to achieve high unit test coverage. We’ll cover solutions separately, and put them together at the end.

How do you solve a problem like WebForms?

WebForms is control-based. Pages are composed of increasingly complex controls, arranged into a control tree. All the code for a particular control is found in its code-behind. It’s so easy to dump all your code in the code-behind and be done with it, which is why you frequently see mixed concerns in WebForms application code. But there’s got to be a better way, right? We could, with masses of self-discipline, ensure a clear demarcation between business and UI logic. We could even create some Models to supply data to the View (a control, in the case of WebForms). This sounds reasonable, but hang on!

We’re talking views and models here. Doesn’t this mean we’re just creating a disorganised MV* framework? If that is the case, why don’t we formalise it and introduce a well-defined framework with clear and consistent development patterns?

MV*

The typical approach for structuring web apps for testability/separation of concerns is Model-View-Controller (MVC). However, MVC does not sit well with WebForms’ evented, control-based approach. A much better fit is the Model-View-Presenter (MVP) pattern, a derivative of MVC.

In MVP the View is the entry point, and has a well-defined interface. Views delegate to Presenters through events. The View and the Model interact through two-way data-binding. The Presenter’s job is to encapsulate business logic, update the Model and respond to events from the View.

As WebForms offers eventing and two-way databinding (through ObjectDataSource), the MVP pattern is a perfect fit. Whilst it is possible to write your own MVP framework, it makes more sense to use an off-the-shelf product such as WebFormsMVP. WebFormsMVP is a mature, battle-hardened, open-source implementation of the MVP pattern. As well as the basic MVP pattern, WebFormsMVP has adapters for common Dependency Injection containers, automatically resolving dependencies for you, and ships with a pub/sub message bus allowing for decoupled, cross-presenter messaging.

It’s also worth noting that usage of WebFormsMVP is opt-in. You can use it where and when you like, as you see fit. It doesn’t force the paradigm on the entirety of your application.

Working with WebFormsMVP

Installation instructions can be found in the WebFormsMVP readme.

Let’s run through a scenario. Imagine we are trying to create the following form:

Basic web form

Without any real consideration of architecture, our code might look something like this:

<p runat="server" id="message" class="msg"></p>

<fieldset>
<legend>User details</legend>
<div>
<label for="firstName">First name</label>
<asp:TextBox runat="server" ID="firstName" />
</div>
<div>
<label for="lastName">Last name</label>
<asp:TextBox runat="server" ID="lastName" />
</div>
<div>
<label for="telephoneNumber">Telepone</label>
<asp:TextBox runat="server" ID="telephoneNumber" />
</div>
<asp:Button runat="server" OnClick="OnUserSubmit" Text="Add user"/>
</fieldset>

And the code behind:

public partial class AddUser : System.Web.UI.UserControl
{
    protected void OnUserSubmit(object sender, EventArgs e)
    {
        var user = new User
        {
            FirstName = firstName.Text,
            LastName = lastName.Text,
            TelephoneNumber = telephoneNumber.Text
        };

        var repo = new InMemoryUserRepository();
        var success = repo.Add(user);

        if (success)
        {
            message.InnerText = "User added successfully";
            message.AddClass("msg--success");
        }
        else
        {
            message.InnerText = "Something went wrong";
            message.AddClass("msg--error");
        }
    }
}

Despite this being an incredibly simple form, we’re already falling into some classic WebForms traps:

  • Manually hydrating objects from the form values.
  • Handling business logic of adding people to a simple InMemoryRepository
  • Violating Single Responsibility Principle – handling reporting back to user when it should be someone else’s job.
  • And we haven’t even got validation in place yet!

Thankfully, all of these can be remedied with the assistance of WebFormsMVP.

We already have our View in the AddUser control, so we need to create a model (specific to the view – not to be confused with domain models such as User), an interface for the View, a Presenter, and an object to encapsulate data passed with events.

Our model is really simple: all the AddUser control needs to render is a success/failure message. In WebFormsMVP, models are simple POCOs.

public class AddUserViewModel
{
    public string Message { get; set; }
    public FormResult Result { get; set; }
}

public enum FormResult
{
    None,
    Success,
    Error
}

Next we need to create an interface for the View. Our View exposes functionality to add users to our repository, so naturally our View needs an AddingUser event. In WebFormsMVP, the View’s interface should extend IView or IView<TModel>. The IView<TModel> interface includes access to a strongly-typed Model property, and a basic Load event:

public interface IAddUserView : IView<AddUserViewModel>
{
    event EventHandler<AddUserEventArgs> AddingUser;
}

WebFormsMVP leverages the standard .NET eventing patterns. Therefore, if data needs to be passed with events, you should extend the EventArgs object and add any data there:

public class AddUserEventArgs : EventArgs
{
    public User User { get; set; }
}

Now our View has a clean Model, a well-defined interface and the ability to pass a User object to any subscribers of its events. The only piece missing now is our Presenter. Presenters must extend the Presenter<TView> base class. Notice how our Presenter takes its dependencies as constructor parameters.

public class AddUserPresenter : Presenter<IAddUserView>
{
    private readonly IUserRepository _repository;

    public AddUserPresenter(IAddUserView view, IUserRepository repository) : base(view)
    {
        _repository = repository;
        view.AddingUser += OnAddingUser;
    }

    private void OnAddingUser(object sender, AddUserEventArgs e)
    {
        var success = _repository.Add(e.User);

        if (success)
        {
            View.Model.Result = FormResult.Success;
            View.Model.Message = "User added";
        }
        else
        {
            View.Model.Result = FormResult.Error;
            View.Model.Message = "Failed to add user";
        }
    }
}

Notice how practically all logic has moved from the original AddUser control into the AddUserPresenter. All that is left now is to revisit the AddUser control and plumb in WebFormsMVP. Our control must implement the IAddUserView interface, extend MvpUserControl<TModel> and be bound to a presenter:

[PresenterBinding(typeof(AddUserPresenter))]
public partial class AddUser : MvpUserControl<AddUserViewModel>, IAddUserView
{
    public void CreateUser(User user)
    {
        if (AddingUser != null)
        {
            AddingUser(this, new AddUserEventArgs { User = user });

            //TODO: refactor into own presenter etc
            message.AddClass(Model.Result == FormResult.Success ? "msg--success" : "msg--error");
            message.InnerText = Model.Message;
        }
    }

    #region Implementation of IAddUserView

    public event EventHandler<AddUserEventArgs> AddingUser;

    #endregion
}

The PresenterBindingAttribute tells WebFormsMVP which presenter is responsible for this View. This is not strictly necessary, as WebFormsMVP contains a number of convention-based discovery strategies for Presenters; however, I prefer the explicit binding that the attribute affords, so I follow this pattern instead.

Notice that we’re no longer hydrating our model manually, and that the CreateUser method takes a strongly-typed model. So how is this method invoked? It’s not a standard click event as we had previously (there is no sender or event object); this is actually one side of two-way data-binding. The other side is the use of the ObjectDataSource control (or, more specifically, the WebFormsMVP extension of it, PageDataSource). To get the two-way data-binding to work, we need our form to be wrapped in a data-bound control such as FormView:

<p runat="server" id="message" class="msg"></p>

<fieldset>
<legend>User details</legend>

<asp:FormView ID="addUserFormView" runat="server" DefaultMode="Insert" DataSourceID="userSource" RenderOuterTable="False">
<InsertItemTemplate>
<div>
<label for="firstName">First name</label>
<asp:TextBox runat="server" ID="firstName" Text='<%# Bind("FirstName") %>'/>
</div>
<div>
<label for="lastName">Last name</label>
<asp:TextBox runat="server" ID="lastName" Text='<%# Bind("LastName") %>' />
</div>
<div>
<label for="telephoneNumber">Telepone</label>
<asp:TextBox runat="server" ID="telephoneNumber" Text='<%# Bind("TelephoneNumber") %>' />
</div>
<asp:Button ID="Button1" runat="server" CommandName="Insert" Text="Add user"/>
</InsertItemTemplate>
</asp:FormView>
</fieldset>

<mvp:PageDataSource runat="server" ID="userSource"
DataObjectTypeName="WebFormsLove.Core.Models.User"
InsertMethod="CreateUser" />

On the FormView, we set its DefaultMode to "Insert" and its DataSourceID to the appropriate name. We tell the PageDataSource control which model the form represents and which method to call for insertion, CreateUser. The submit button no longer has its own click event, but instead uses the CommandName property with a value of "Insert" so that its click can be mapped to the PageDataSource.InsertMethod property. Each textbox’s value is now bound to the appropriate property via a <%# Bind("PropertyName") %> expression. From this we get strongly-typed data in and out of the form, with very little effort.

Whilst our code is now a lot cleaner and responsibility is divided more appropriately, one issue remains: the View is still responsible for reporting success back to the user. If we had a multitude of forms throughout our application, all requiring a consistent approach to reporting back to the user, we would currently have to duplicate this code in many places. Instead of copy-paste, we can move the reporting into its own Presenter and take advantage of the WebFormsMVP message bus for cross-presenter communication. We can then reuse this Presenter throughout our application, increasing code reuse whilst keeping our code highly decoupled.

Again, we need to create a View (and associated interface), a Model and a Presenter. Our View is so simple that it doesn’t need a new interface of its own, so we can just use the basic IView<TModel> interface directly.

public class FormMessageModel
{
    public FormMessageModel()
    {
        Result = FormResult.None;
    }

    public FormResult Result { get; set; }
    public string Message { get; set; }
}

public class FormMessagePresenter : Presenter<IView<FormMessageModel>>
{
    public FormMessagePresenter(IView<FormMessageModel> view) : base(view)
    {
        view.Load += OnLoad;
    }

    private void OnLoad(object sender, EventArgs e)
    {
        Messages.Subscribe<FormMessageModel>(msg => { View.Model = msg; });
    }
}

The FormMessagePresenter does very little other than listen for incoming messages on the bus. The subjects of messages are object types (rather than string names as you may be used to in, say, a JS framework), and the message dispatcher is aware of type inheritance. If you subscribe to Object messages you will receive all messages sent by everyone, which is unlikely to be intentional! For this reason, messages should ideally be a custom type. Also, messages are always delivered, even if you subscribe after another Presenter has published a message.

Here’s the control for our FormMessagePresenter:

<asp:MultiView runat="server" ID="mvFormMessage">
  <asp:View runat="server" ID="successView">
    <div class="msg msg--success">
      <p><%= Model.Message %></p>
    </div>
  </asp:View>
  <asp:View runat="server" ID="errorView">
    <div class="msg msg--error">
      <p><%= Model.Message %></p>
    </div>
  </asp:View>
  <asp:View runat="server" ID="emptyView"/>
</asp:MultiView>

And the code-behind:

[PresenterBinding(typeof(FormMessagePresenter))]
public partial class FormMessage : MvpUserControl<FormMessageModel>
{
    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e);

        View view;
        switch (Model.Result)
        {
            case FormResult.Success:
                view = successView;
                break;
            case FormResult.Error:
                view = errorView;
                break;
            default: //FormResult.None
                view = emptyView;
                break;
        }

        mvFormMessage.SetActiveView(view);
    }
}

Now that we have a nicely encapsulated FormMessagePresenter awaiting messages, we can modify our AddUserPresenter to publish a message after adding a user:

public class AddUserPresenter : Presenter<IAddUserView>
{
    private readonly IUserRepository _repository;

    public AddUserPresenter(IAddUserView view, IUserRepository repository) : base(view)
    {
        _repository = repository;
        view.AddingUser += OnAddingUser;
    }

    private void OnAddingUser(object sender, AddUserEventArgs e)
    {
        var success = _repository.Add(e.User);

        var msg = success
            ? new FormMessageModel { Result = FormResult.Success, Message = "User added" }
            : new FormMessageModel { Result = FormResult.Error, Message = "User add failed" };

        Messages.Publish(msg);
    }
}

We can then relieve the AddUser control of the responsibility of reporting back to the user. Note: since we have delegated that responsibility to the FormMessagePresenter, the AddUser View no longer needs a Model.

public interface IAddUserView : IView
{
    void CreateUser(User user);

    event EventHandler<AddUserEventArgs> AddingUser;
}

[PresenterBinding(typeof(AddUserPresenter))]
public partial class AddUser : MvpUserControl, IAddUserView
{
    public AddUser()
    {
        AutoDataBind = false;
    }

    public void CreateUser(User user)
    {
        if (AddingUser != null)
        {
            AddingUser(this, new AddUserEventArgs { User = user });
        }
    }

    #region Implementation of IAddUserView

    public event EventHandler<AddUserEventArgs> AddingUser;

    #endregion
}

Testing

The whole point of MVP is to encourage testability, yet so far we haven’t seen a single test! Let’s remedy that and look at the tests for the AddUserPresenter. We’re assuming the use of RhinoMocks for creating test mocks, but any equivalent library will do. Our tests should prove:

  • The presenter subscribes to all relevant events on the view
  • The presenter can add a user to repository, and reports success
  • The presenter handles failure to add a user to repository, and reports error

[TestClass]
public class AddUserPresenterTest
{
    private IUserRepository _repo;
    private IAddUserView _view;
    private AddUserPresenter _presenter;

    [TestInitialize]
    public void TestInit()
    {
        _repo = MockRepository.GenerateMock<IUserRepository>();
        _view = MockRepository.GenerateStub<IAddUserView>();
        _presenter = new AddUserPresenter(_view, _repo)
        {
            Messages = MockRepository.GenerateMock<IMessageCoordinator>()
        };
    }

    [TestCleanup]
    public void TestCleanup()
    {
        _repo.VerifyAllExpectations();
        _view.VerifyAllExpectations();
        _presenter.Messages.VerifyAllExpectations();
    }

    [TestMethod]
    public void ConstructorHooksUpEventHandlers()
    {
        // Arrange
        _view.Expect(x => x.AddingUser += Arg<EventHandler<AddUserEventArgs>>.Is.Anything);

        // Act
        new AddUserPresenter(_view, _repo);
    }

    [TestMethod]
    public void AddsUserToRepository()
    {
        // Arrange
        var user = new User { Id = Guid.NewGuid() };

        _repo.Expect(x => x.Add(user)).Return(true);
        _presenter.Messages.Expect(x => x.Publish(Arg<FormMessageModel>.Matches(msg => msg.Result == FormResult.Success)));

        // Act
        // Remember, the view delegates to the presenter.
        // Therefore to test a method on the presenter you should raise an event on the view!
        _view.Raise(x => x.AddingUser += null, _view, new AddUserEventArgs { User = user });
    }

    [TestMethod]
    public void HandlesFailureToAddUser()
    {
        // Arrange
        var user = new User { Id = Guid.NewGuid() };

        _repo.Expect(x => x.Add(user)).Return(false);
        _presenter.Messages.Expect(x => x.Publish(Arg<FormMessageModel>.Matches(msg => msg.Result == FormResult.Error)));

        // Act
        _view.Raise(x => x.AddingUser += null, _view, new AddUserEventArgs { User = user });
    }
}

Testing the FormMessagePresenter would be a very similar process, so it’s not worth covering here.

Dependency Injection

Presenters should have all their dependencies passed into the constructor, but the WebFormsMVP framework instantiates each Presenter after the view has loaded – so how does it resolve dependencies? The simple answer is: it doesn’t! WebFormsMVP comes with a number of adapters for popular IoC frameworks, which it asks to resolve the dependencies on its behalf. Configure all dependencies in your IoC container as you normally would, and then register the relevant WebFormsMVP adapter in your global.asax:

private void Application_Start(object sender, EventArgs e)
{
    var unityContainer = ConfigureUnityContainer();
    PresenterBinder.Factory = new UnityPresenterFactory(unityContainer);
}

private static UnityContainer ConfigureUnityContainer()
{
    var unityContainer = new UnityContainer();
    var section = ConfigurationManager.GetSection("unity") as UnityConfigurationSection;
    if (section != null)
    {
        section.Configure(unityContainer);
    }

    return unityContainer;
}

Wrapping up WebFormsMVP

Our add user control is now incredibly simple – besides raising events it does very little. All our business logic is encapsulated in the AddUserPresenter and is entirely testable. Reporting success/failure has been refactored into a reusable Presenter/View/Model, FormMessagePresenter. Everything is decoupled by communicating strictly over the WebFormsMVP message bus. And finally, we have everything working with our favourite IoC container to have all dependencies injected into each Presenter.

Abstracting Sitecore’s API

With the MVP pattern we have seen how your controls can be organised to promote testability. However, the scenario we’ve been running through so far has had no interaction with Sitecore’s API. Imagine the form label text was pulled from Sitecore, meaning we have to consume Sitecore.Context.Item. We would either have to put this logic into our View (bad) or our Presenter (untestable). How can we remedy this situation?

Let’s first imagine the following code, typical of a Sitecore solution:

Item context = Sitecore.Context.Item;
titleLiteral.Text = FieldRenderer.Render(context, "title");

This short snippet seems fairly innocuous, but it is problematic in many ways: it is tightly coupled to concrete implementations, it refers to global state through Sitecore.Context, and it uses a magic string to refer to a field name.

The naive approach to solving these issues is to manually wrap as much of Sitecore’s API as possible. This sounds reasonable, but you will quickly discover it is not a trivial task, requiring masses of boilerplate code. Perhaps then we should limit scope, focussing on the classes we use most regularly.

public interface IField
{
    Field Original { get; }
    string Id { get; }
    string Name { get; }
    string RawValue { get; }

    string Render();
}

public interface IItem
{
    Item Original { get; }
    string Id { get; }

    IField GetField(string name);
}

public interface IItemRetriever
{
    IItem GetContextItem();

    IItem SelectSingleByPath(string path);
    IEnumerable<IItem> SelectByPath(string path);
}

Whilst these interfaces only cover a fraction of the Sitecore API’s surface area, they get us most of the way to breaking the hard dependencies on Sitecore types.
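
As an illustration of how thin these wrappers can be, a concrete IField implementation around Sitecore’s Field type might look roughly like this (a sketch rather than production code):

public class SitecoreField : IField
{
    private readonly Field _field;

    public SitecoreField(Field field)
    {
        if (field == null) throw new ArgumentNullException("field");
        _field = field;
    }

    public Field Original { get { return _field; } }
    public string Id { get { return _field.ID.ToString(); } }
    public string Name { get { return _field.Name; } }
    public string RawValue { get { return _field.Value; } }

    public string Render()
    {
        // Delegate to Sitecore's own renderer so rich text, links etc. are handled as usual.
        return FieldRenderer.Render(_field.Item, _field.Name);
    }
}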

Some problems still exist, however. We don’t have any strong typing on IItem, so the magic strings still persist! Imagine this was an excerpt from a code-behind:

var item = _itemRetriever.GetContextItem();
var titleField = item.GetField("title");
titleLiteral.Text = titleField.Render();

Why is it so dangerous to refer to fields by name? Suppose we refactor this Sitecore template, renaming the “title” field to something more appropriate. We would have to update every reference to that field in our code. Some references can easily be overlooked, and this is where danger creeps in: referring to a non-existent field in Sitecore does not cause a compile-time error, but a run-time error! Of course we want to avoid the possibility of run-time errors.

Strongly-typed Items

Strongly-typing Sitecore items would reduce the occurrence of this class of errors. So how might we achieve something like that? We could create an IItemMapper whose responsibility is to map each field from an item onto some model class.

public interface IItemMapper
{
    TModel MapTo<TModel>(Item item) where TModel : IItem;
}

Since we’ll now be working with strongly-typed items, we should modify some of our interfaces to allow arbitrary types to be returned:

public interface IItemRetriever
{
    TModel GetContextItem<TModel>() where TModel : IItem;

    TModel SelectSingleByPath<TModel>(string path) where TModel : IItem;
    IEnumerable<TModel> SelectByPath<TModel>(string path) where TModel : IItem;
}

Before we discuss any more, it’s worth considering how a model might look, along with a concrete implementation of IItemRetriever. This will show us how everything is tied together.

public class FieldMapAttribute : Attribute
{
    public FieldMapAttribute(string name)
    {
        Name = name;
    }

    // Exposed so the mapper can read the field name from the attribute.
    public string Name { get; private set; }
}

public class MyItem : IItem
{
    [FieldMap("title")]
    public IField Title { get; set; }

    // etc
}

public class ItemRetriever : IItemRetriever
{
    private readonly IItemMapper _mapper;

    public ItemRetriever(IItemMapper mapper)
    {
        _mapper = mapper;
    }

    public TModel GetContextItem<TModel>() where TModel : IItem
    {
        return _mapper.MapTo<TModel>(Sitecore.Context.Item);
    }

    // etc
}

How does this work? ItemRetriever delegates to IItemMapper. IItemMapper reflects on the type it’s given, searching for any usage of FieldMapAttribute. With a list of field mappings, the mapper can wrap fields in an implementation of IField and return a strongly-typed model ready for use.
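
A reflection-based mapper along these lines might look roughly like the following (a sketch, assuming a SitecoreField wrapper like the one above and models with a parameterless constructor; requires System.Linq):

public class ReflectionItemMapper : IItemMapper
{
    public TModel MapTo<TModel>(Item item) where TModel : IItem
    {
        // Assumes TModel has a parameterless constructor.
        var model = Activator.CreateInstance<TModel>();

        foreach (var property in typeof(TModel).GetProperties())
        {
            // Find properties decorated with [FieldMap("...")].
            var attribute = property.GetCustomAttributes(typeof(FieldMapAttribute), true)
                                    .Cast<FieldMapAttribute>()
                                    .FirstOrDefault();
            if (attribute == null) continue;

            var field = item.Fields[attribute.Name];
            if (field != null)
            {
                // Wrap the Sitecore field in our IField abstraction.
                property.SetValue(model, new SitecoreField(field), null);
            }
        }

        return model;
    }
}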

var item = _itemRetriever.GetContextItem<MyItem>();
titleLiteral.Text = item.Title.Render();

Our code no longer exhibits any of the issues present in the original snippet: no globals, no magic strings, no concrete implementations!

Model Generation

We’ve discussed how, with a few classes, we can abstract away the vast majority of code consuming the Sitecore API. But what about all the models? You may be working on a site with hundreds or thousands of templates, and writing classes for every one of these would be tedious and error-prone. Besides, with the process of writing strongly-typed classes being so mechanical, it’s begging for some automation!

Text Template Transformation Toolkit (or T4 for short) is the perfect tool for automating creation of these classes.

Whilst there are a number of projects available for generating strongly-typed items, we chose to use Kern Herskind’s TDS T4 Model Generation project. TDS T4 Model Generation uses Team Development for Sitecore for its data store, and as we already use TDS it seemed a perfect fit. It walks the item tree in TDS, finds all templates and generates classes and interfaces for each of them. It has deep coverage of field types, so you can work with Reference fields etc. through a clean API. It also ships with a number of classes that mirror the hypothetical interfaces I outlined above. What this all means is that you can generate strongly-typed items directly from within Visual Studio, with tools you’re probably already using anyway!

Combining WebFormsMVP with an abstracted Sitecore API

Armed with strongly-typed items, and a wrapper around the most common parts of the Sitecore API, we can now put this kind of code into our Presenter and still write tests for it!

Let’s modify our previous example to pull some data from Sitecore. First we should (re)create the AddUserModel.

public class AddUserModel
{
    public IAddUserTemplate Item { get; set; }
}
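
IAddUserTemplate here stands for the interface generated from the corresponding Sitecore template; a hand-written equivalent for this example might look something like the following (field names beyond FirstNameLabel are illustrative):

public interface IAddUserTemplate : IItem
{
    // One IField per template field; FirstNameLabel is used in the markup below.
    IField FirstNameLabel { get; }
    IField LastNameLabel { get; }
    IField TelephoneNumberLabel { get; }
}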

Then we can modify our Presenter to accept an IItemRetriever as a dependency, and populate the Model:

public class AddUserPresenter : Presenter<IAddUserView>
{
    private readonly IUserRepository _repository;
    private readonly IItemRetriever _itemRetriever;

    public AddUserPresenter(IAddUserView view, IUserRepository repository, IItemRetriever itemRetriever) : base(view)
    {
        _repository = repository;
        _itemRetriever = itemRetriever;

        view.Load += OnLoad;
        view.AddingUser += OnAddingUser;
    }

    private void OnLoad(object sender, EventArgs e)
    {
        View.Model.Item = _itemRetriever.GetContextItem<IAddUserTemplate>();
    }

    // etc
}

And in our View we can now output data from the Sitecore item. Note: because this is inside a data-bound control (our FormView), we’re using the data-binding expression, <%# %>. If this code was not in a data-bound control, you would instead use the standard inline expression, <%= %>.

<label for="firstName"><%# Model.Item.FirstNameLabel.Render() %></label>

That’s about it. I’d be interested in hearing other people’s thoughts.

Sitecore Azure Walkthrough and Gotchas

Walkthrough

With little documentation available online on this, I thought I’d share a walkthrough with all the gotchas I spotted in getting it up and running. Hope it speeds up someone else’s attempts. The version I have running is Sitecore Azure 3.1.

Environment File

You need to request one from Sitecore, as detailed in their documentation. As this can take a while, it’s best to do this up-front. It took under an hour to get back to me, but their docs say to allow for up to 24hrs. The doc links you to their generic global contact-us form, which isn’t too helpful. There is also an email address which might get you a response faster – with details of what to send them in the following post. But the best way I’ve found to request an environment file is via the following URL, as it captures all the fields you need. (Note: you can’t have dashes in your project name.)

Azure Pre-requisites

Unless you’ve installed SQL Server 2012, you WILL need to install the following:

  • Microsoft SQL Server 2012 Shared Management Objects and
  • Microsoft System CLR Types for Microsoft SQL Server 2012

Microsoft has made this part quite difficult. Firstly, they’ve changed the URL of the download, so the Sitecore doc is out of date. The actual download URL for these resources is here.

Next, they’ve chosen not to indicate the version of the MSI in its filename, so you may inadvertently install the x86 one when you need the x64 one. You can find this out by downloading the MSI and checking the Details tab of its file properties.

[Screenshot: MSI file properties, Details tab]

When you install these, note that one is dependent on the other, but you can figure the order out quite easily.

You also need to install the MS Azure SDK 2.0. Note – this is another gotcha: I didn’t read the doc carefully enough and went ahead and installed SDK 2.1, but the version of Sitecore Azure I was planning to use, Sitecore Azure 3.1.0 rev. 130731, was not compatible with it. I only found this out when my deploys were failing with the following:

Exception: System.ApplicationException

Message: Can’t find sdk path

Source: Microsoft.ServiceHosting.Tools.MSBuildTasks

at Microsoft.ServiceHosting.Tools.Internal.SDKPaths.GetSDKPath()

Sitecore

Installing Sitecore 7.0 is fairly straightforward. Older versions of Sitecore are compatible with Azure but require some config, so I thought I’d take the path of least resistance. Note that v7 depends on .NET 4.5, so ensure that is installed (Visual Studio 2012 users will have it already; earlier versions will need to install it separately). You then install the Sitecore Azure module by installing the package found on the SDN using the Package Installer (I’m using Sitecore Azure 3.1.0 rev. 130731.zip).

When the install completes you get a shiny new button:

[Screenshot: the new Sitecore Azure button]

This opens a tool which allows you to run your deployments. In the process it will ask you to upload your environment file and also install a management certificate. The latter process is very straightforward and doesn’t merit any observations.

Finally you should be ready to kick off a deploy.

Network Problems

When I tried this on my workstation, behind a corporate firewall and a web proxy, I ran into innumerable issues. It was quite clear something was getting blocked, because the XAML interface was slow to respond and hung when trying to do anything:

[Screenshot: the Sitecore Azure dialog hanging]

I was getting errors logged which pointed at network issues:

[Screenshot: network-related errors in the Sitecore log]

[Screenshot: further network-related errors]

SQL Timeouts

The next problem I encountered was deploys failing to complete with SQL stack traces that looked like this:

ManagedPoolThread #16 16:45:00 ERROR Sitecore.Azure: Deploy database error. Retry 6

Exception: System.Data.SqlClient.SqlException

Message: A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: SSL Provider, error: 0 – The wait operation timed out.)

Source: .Net SqlClient Data Provider

at System.Data.ProviderBase.DbConnectionPool.TryGetConnection(DbConnection owningObject, UInt32 waitForMultipleObjectsTimeout, Boolean allowCreate, Boolean onlyOneCheckConnection, DbConnectionOptions userOptions, DbConnectionInternal& connection)

at System.Data.ProviderBase.DbConnectionPool.TryGetConnection(DbConnection owningObject, TaskCompletionSource`1 retry, DbConnectionOptions userOptions, DbConnectionInternal& connection)

at System.Data.ProviderBase.DbConnectionFactory.TryGetConnection(DbConnection owningConnection, TaskCompletionSource`1 retry, DbConnectionOptions userOptions, DbConnectionInternal& connection)

at System.Data.ProviderBase.DbConnectionClosed.TryOpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory, TaskCompletionSource`1 retry, DbConnectionOptions userOptions)

at System.Data.SqlClient.SqlConnection.TryOpen(TaskCompletionSource`1 retry)

at System.Data.SqlClient.SqlConnection.Open()

at Sitecore.Azure.Managers.Pipelines.DeployDatabase.TransferData.TransferDataWorker(Table table, Database targetDatabase)

at Sitecore.Azure.Managers.Pipelines.DeployDatabase.DeployDatabasePipelineProcessor.DoDeploy[T](Func`3 func, Int32 repeat, IEnumerable`1 objects, Database targetDatabase, Action`1 exceptionCallBack)

The advice I got from the helpful team at Sitecore Support was that the DefaultSQLTimeout setting in configuration was probably set too low, so I ended up amending the default value found here:

<setting name="DefaultSQLTimeout" value="00:05:00" />

to 30 minutes. On redeploy I was able to successfully complete a deployment.
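
In other words, the amended value looked like the following (the exact timeout is down to taste):

<setting name="DefaultSQLTimeout" value="00:30:00" />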

Missing DLLs

Subsequent to raising a ticket about it and resolving it, I noticed the missing dlls issue has been blogged about elsewhere:  http://toadcode.blogspot.co.uk/2013/04/sitecore-azure-getting-up-and-running.html

For the sake of completeness and given I am working with a different version of Sitecore than Toad’s Code, I thought I’d add what I had to do.  The missing files are as follows:

  1. System.Web.Mvc.dll 3.0.0.0
  2. System.Web.Helpers.dll 1.0.0.0
  3. System.Web.WebPages.dll 1.0.0.0
  4. System.Web.WebPages.Deployment.dll 1.0.0.0
  5. System.Web.WebPages.Razor.dll 1.0.0.0
  6. Microsoft.Web.Infrastructure.dll 1.0.0.0

At this stage I did not have an instance of Visual Studio running with my own code and a build pointing at my Sitecore website. Nevertheless, in order to obtain the correct versions of these (there were multiple versions on my machine, and you need to choose the right ones), I opened Visual Studio, created a new ASP.NET project and added them as references, because VS does a nice job of clearly specifying which versions you are adding on the right-hand side. I then manually copied the binaries from this compiled project into my Sitecore instance and redeployed. Note – you can also RDP onto your already-deployed instances and “hot fix” them.

[Screenshot: adding the assembly references in Visual Studio]

When this completed, I had a blank Sitecore delivery instance running in the cloud:

[Screenshot: the default Sitecore site running in Azure]