# Partial Updates with HTTP PATCH using ServiceStack.net and the JSON Patch format (RFC 6902)

I have been looking into implementing partial updates with the HTTP PATCH method, using ServiceStack.net and the JSON Patch format (RFC 6902).

This is of interest since many updates do not neatly fit the PUT method, which is typically used for full entity updates (all properties). PATCH is intended for one or more partial updates. There are a few blogs describing the use cases.

I’ve been happy using ServiceStack the way it was designed – RESTful, simple, using Message Based designs.

I could implement PATCH using my own message format – that is easy to do. Usually it would be the actual DTO properties, plus a list of the fields which are actually going to be updated. You wouldn’t update all fields, and you can’t simply update only the non-null properties, since sometimes “null” is a valid value for a property (otherwise it would be impossible to change a property from non-null to null).
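For illustration, here is a minimal, framework-free sketch of that “DTO plus field list” approach. The `EmployeeUpdate` and `PartialUpdater` names are hypothetical (not ServiceStack types): only the properties named in `UpdatedFields` are copied onto the target, so an explicit null in a listed field really does set null.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical request DTO: the full property set plus the list of fields to apply.
public class EmployeeUpdate
{
    public long Id { get; set; }
    public string Title { get; set; }
    public int? CubicleNo { get; set; }
    // Only the property names listed here get copied onto the entity.
    public List<string> UpdatedFields { get; set; } = new List<string>();
}

public static class PartialUpdater
{
    // Copy only the named properties from the request onto the target object,
    // so a null in a listed field genuinely sets the property to null.
    public static void Apply(object target, object request, IEnumerable<string> fields)
    {
        foreach (var name in fields)
        {
            var src = request.GetType().GetProperty(name);
            var dst = target.GetType().GetProperty(name);
            if (src != null && dst != null && dst.CanWrite)
                dst.SetValue(target, src.GetValue(request, null), null);
        }
    }
}
```

The tradeoff is clear here: the request stays strongly typed, but the client must remember to maintain the field list alongside the values.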

In my opinion, using JSON Patch for the Request body has pros and cons.
Pros:

• it is an official RFC
• it covers a lot of use cases

Cons:

• very generic, so we lose some of the benefit of strong typing
• doesn’t have a slot for the Id of a resource when calling PATCH /employees/{Id}
• doing this the “JSON Patch way” would be { "op": "replace", "path": "/employees/123/title", "value": "Administrative Assistant" }, but that wastes the value of having the Id on the routing path.

JSON Patch supports a handful of operations: “add”, “remove”, “replace”, “move”, “copy”, “test”. I will focus on the simple “replace” op, since it easily maps to replacing a property on a DTO (or field in a table record).

The canonical example looks like this:

PATCH /my/data HTTP/1.1
Host: example.org
Content-Length: 55
Content-Type: application/json-patch+json
If-Match: "abc123"

[
{ "op": "replace", "path": "/a/b/c", "value": 42 }
]

I’m going to ignore the If-Match: / ETag: headers for now. Those are useful if you want to tell the server to apply your changes only if the resource still matches your “If-Match” header (i.e., no changes in the meantime). That exercise is left to the reader.
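For the curious, the shape of that conditional update is roughly the following (a framework-free sketch; how you compute and store the resource’s current ETag is up to you, and the method name is mine):

```csharp
using System;

public static class ConditionalUpdate
{
    // RFC 7232-style precondition: apply the patch only when the client's
    // If-Match value still matches the resource's current ETag.
    // Returns 204 (No Content) when applied, 412 (Precondition Failed) when stale.
    public static int TryApply(string currentEtag, string ifMatchHeader, Action applyPatch)
    {
        if (ifMatchHeader != null && ifMatchHeader != currentEtag)
            return 412; // someone changed the resource since the client read it

        applyPatch();
        return 204;
    }
}
```

The point is that the check and the apply must happen together on the server, so two concurrent PATCH clients can’t silently clobber each other’s writes.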

Let’s say we have a more practical example:

• an Employee class, backed by an [Employee] table, accessed by OrmLite
• an EmployeeService class, implementing the PATCH method
• the Request DTO to the Patch() method aligns to the JSON Patch structure

The Employee class would simply look like this (with routing for basic CRUD):

[Route("/employees", "GET,POST")]
[Route("/employees/{Id}", "GET,PUT")]
public class Employee
{
    public long Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
    public string Title { get; set; }
    public int? CubicleNo { get; set; }
    public DateTime StartDate { get; set; }
    public float Longitude { get; set; }
    public float Latitude { get; set; }
}

Now the shape of JSON Patch replace ops would look like this:

PATCH /employees/123 HTTP/1.1
Host: example.org
Content-Type: application/json

[
{ "op": "replace", "path": "/title", "value": "Junior Developer" },
{ "op": "replace", "path": "/cubicleno", "value": 23 },
{ "op": "replace", "path": "/startdate", "value": "2013-06-02T09:34:29-04:00" }
]

The path is the property name in this case, and the value is what to update to.
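That path-to-property mapping can be isolated into a small helper, equivalent to the lookup my service code does internally: strip the leading slash and match the property name case-insensitively (a standalone sketch with a name of my choosing; it handles single-level paths only, which is all the flat DTO needs):

```csharp
using System;
using System.Linq;
using System.Reflection;

public static class JsonPatchPath
{
    // "/title" -> the Title property of the DTO type, matched case-insensitively.
    // Returns null when the path names no property (that op would be skipped).
    public static PropertyInfo Resolve(Type dtoType, string path)
    {
        string fieldName = path.TrimStart('/');
        return dtoType.GetProperties()
            .FirstOrDefault(p => string.Equals(p.Name, fieldName,
                StringComparison.OrdinalIgnoreCase));
    }
}
```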

And yes, i also know i am sending Content-Type: application/json instead of Content-Type: application/json-patch+json. We’ll have to get into custom content-type support later too.

Now, sending a generic data structure as the Request DTO to a specific resource ID doesn’t cleanly map to the ServiceStack style, because:

• each Request DTO should be a unique class and route
• there is not a field in the Request for the ID of the entity

The simple way to map the JSON to a C# class would be to define an “op” element class, and have a List&lt;T&gt; of them, like so:

public class JsonPatchElement
{
    public string op { get; set; } // "add", "remove", "replace", "move", "copy" or "test"
    public string path { get; set; }
    public string value { get; set; }
}

We create a unique Request DTO so we can route to the Patch() service method.

[Route("/employees/{Id}", "PATCH")]
public class EmployeePatch : List<JsonPatchElement>
{
}

But how do we get the #$%&& Id from the route?? This code throws a RequestBindingException! But i can’t change the shape of the PATCH request body from a JSON array []. The answer was staring me in the face: just add it to the DTO class definition, and ServiceStack will map to it. I was forgetting the C# class doesn’t have to be the same shape as the JSON.

[Route("/employees/{Id}", "PATCH")]
public class EmployeePatch : List<JsonPatchElement>
{
    public long Id { get; set; }
}

Think of this class as a List<T> with an additional Id property. When the method is called, the JSON Patch array is mapped and the Id is copied from the route {Id}.

public object Patch(EmployeePatch dto)
{
    // dto.Id == 123
    // dto[0].path == "/title"
    // dto[0].value == "Joe"
    // dto[1].path == "/cubicleno"
    // dto[1].value == "23"
    // ...
}

The only wrinkle is that all the JSON values come in as C# strings, even if they are numeric or date types. At least you know the strong typing from your C# class, so you know what to convert to.

My full Patch() method is below. Note the partial update code uses reflection to update properties of the same name, and does primitive type checking for parsing the string values from the request DTO.

public object Patch(EmployeePatch dto)
{
    // partial updates
    // get from persistent data store by id from routing path
    var emp = Repository.GetById(dto.Id);
    if (emp != null)
    {
        // read from request dto properties
        var properties = emp.GetType().GetProperties();

        // update values which are specified to update only
        foreach (var op in dto)
        {
            string fieldName = op.path.Replace("/", "").ToLower(); // assume leading /slash only for example

            // patch field is in type
            if (properties.ToList().Where(x => x.Name.ToLower() == fieldName).Count() > 0)
            {
                var persistentProperty = properties.ToList().Where(x => x.Name.ToLower() == fieldName).First();

                // update property on persistent object
                // i'm sure this can be improved, but you get the idea...
                if (persistentProperty.PropertyType == typeof(string))
                {
                    persistentProperty.SetValue(emp, op.value, null);
                }
                else if (persistentProperty.PropertyType == typeof(int))
                {
                    int valInt = 0;
                    if (Int32.TryParse(op.value, out valInt))
                    {
                        persistentProperty.SetValue(emp, valInt, null);
                    }
                }
                else if (persistentProperty.PropertyType == typeof(int?))
                {
                    int valInt = 0;
                    if (op.value == null)
                    {
                        persistentProperty.SetValue(emp, null, null);
                    }
                    else if (Int32.TryParse(op.value, out valInt))
                    {
                        persistentProperty.SetValue(emp, valInt, null);
                    }
                }
                else if (persistentProperty.PropertyType == typeof(DateTime))
                {
                    DateTime valDt = default(DateTime);
                    if (DateTime.TryParse(op.value, out valDt))
                    {
                        persistentProperty.SetValue(emp, valDt, null);
                    }
                }
            }
        }

        // update
        Repository.Store(emp);
    }

    // return HTTP code and Location: header for the resource
    // 204 No Content: the request was processed successfully, but no response body is needed.
    return new HttpResult()
    {
        StatusCode = HttpStatusCode.NoContent,
        Location = base.Request.AbsoluteUri,
        Headers = {
            // allow jquery ajax in firefox to read the Location header - CORS
            { "Access-Control-Expose-Headers", "Location" },
        }
    };
}

For an example of calling this from the strongly-typed ServiceStack REST client, my integration test looks like this:

[Fact]
public void Test_PATCH_PASS()
{
    var restClient = new JsonServiceClient(serviceUrl);

    // dummy data
    var newemp1 = new Employee()
    {
        Id = 123,
        Name = "Kimo",
        StartDate = new DateTime(2015, 7, 2),
        CubicleNo = 4234,
        Email = "test1@example.com",
    };
    restClient.Post<object>("/employees", newemp1);

    var emps = restClient.Get<List<Employee>>("/employees");
    var emp = emps.First();

    var empPatch = new Operations.EmployeePatch();
    empPatch.Add(new Operations.JsonPatchElement()
    {
        op = "replace",
        path = "/title",
        value = "Kahuna Laau Lapaau",
    });
    empPatch.Add(new Operations.JsonPatchElement()
    {
        op = "replace",
        path = "/cubicleno",
        value = "32",
    });
    restClient.Patch<object>(string.Format("/employees/{0}", emp.Id), empPatch);

    var empAfterPatch = restClient.Get<Employee>(string.Format("/employees/{0}", emp.Id));
    Assert.NotNull(empAfterPatch);

    // patched
    Assert.Equal("Kahuna Laau Lapaau", empAfterPatch.Title);
    Assert.Equal("32", empAfterPatch.CubicleNo.ToString());

    // unpatched
    Assert.Equal("test1@example.com", empAfterPatch.Email);
}

I am uploading this code to GitHub as a full working Visual Studio 2013 project, including xUnit.net tests. I hope this has been useful to demonstrate the flexibility of using ServiceStack and C# to implement the HTTP PATCH method using JSON Patch (RFC 6902) over the wire.

Update: i refactored the code so that any object can have its properties “patched” from a JsonPatchRequest DTO by using an extension method populateFromJsonPatch().

public object Patch(EmployeePatch dto)
{
    // partial updates
    // get from persistent data store by id from routing path
    var emp = Repository.GetById(dto.Id);
    if (emp != null)
    {
        // update values which are specified to update only
        emp.populateFromJsonPatch(dto);

        // update
        Repository.Store(emp);
    }
    // ...
}

# Using Handlebars.js templates as precompiled JS files

I’ve previously used Handlebars templates in projects, but only in the simple ways – i defined a &lt;script&gt; block as inline html templates, and used them in my js code. However, i have a project where i need all the code, including html templates, as js files. Luckily Handlebars can do this, but we’ll need to set up the proper node-based build environment:

• node.js
• gulp task runner
• bower for flat package management
• handlebars for templates

The templates will get “precompiled” by gulp, resulting in a pure js file to include in the html page. Then we’ll be able to code in HTML, but deploy as JS.

First i create a new empty ASP.NET Web project in Visual Studio. I’ll call it: HandlebarsTest. Note that almost none of this is Visual Studio-specific, so 95% is applicable to any other development environment.
Next, i will set up Gulp and Bower, similar to how i did it in my 2 prior posts.

I will create the gulpfile.js like so (we’ll add to it later):

var gulp = require('gulp');

gulp.task('default', ['scripts'], function() {
});

gulp.task('scripts', function() {
});

Open the node command prompt, and change to the new directory:

cd HandlebarsTest\HandlebarsTest
npm init
npm install -g gulp
npm install gulp --save-dev
npm install gulp-uglify --save-dev
npm install gulp-concat --save-dev
npm install gulp-wrap --save-dev
npm install gulp-declare --save-dev

I will create the .bowerrc file like so:

{
    "directory": "js/lib"
}

OK, now for some Handlebars stuff. One thing to understand is that we need to do Handlebars work at build/compile time AND at runtime. That means:

• the precompilation will be run by gulp during build time (install gulp-handlebars using npm), and
• the web browser will execute the templates with the handlebars-runtime library (install to the project using Bower)

npm install gulp-handlebars --global
npm install gulp-handlebars --save-dev

## Bower (client-side) packages

I will use Bower to install the client-side libs: handlebars, jquery, etc.

First, create the bower.json file:

bower init

Next, start installing!

bower install jquery
bower install handlebars

Those files get installed to /js/lib/* , per my .bowerrc file. Now we can reference them in scripts, or use them for js bundles.

## HTML, Javascript, and Handlebars templates together

My use-case is to:

1. Have a static HTML page
2. Include a script tag which loads a single JS file
3. The single JS file will load/contain the libraries AND the main execution code
4. The main execution code will render a DIV element which renders a Handlebars template with an object
The HTML page just includes a single JS file, which will be built:

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Handlebars Test</title>
    <script type="text/javascript">
        (function() {
            function async_load() {
                var cb = 'cb=' + (new Date).getTime();
                var rmist = document.createElement('script');
                rmist.type = 'text/javascript';
                rmist.async = true;
                rmist.src = '../js/dist/bundle.js?' + cb;
                var x = document.getElementsByTagName('script')[0];
                x.parentNode.insertBefore(rmist, x);
            }
            if (window.attachEvent)
                window.attachEvent('onload', async_load);
            else
                window.addEventListener('load', async_load, false);
        }());
    </script>
</head>
<body>
    <h1>Handlebars Test</h1>
    <p id="main-content">
        There will be a dynamic element added after this paragraph.
    </p>
    <p id="dynamic-content"></p>
</body>
</html>

Handlebars templates will be in /templates/*.hbs . Here’s an example, which i’m calling /templates/hellotemplate.hbs:

<div class="hello" style="border: 1px solid red;">
    <h1>{{title}}</h1>
    <div class="body">
        Hello, {{name}}! I'm a template.
    </div>
</div>

Javascript will be in /js/app/app.js , alongside the other libraries. Here, i’m taking direction from https://github.com/wycats/handlebars.js#precompiling-templates – gulp-handlebars handles the precompilation. We will run the ‘gulp’ build process to precompile the hbs templates to js later. The app.js code will need to render the precompiled template with the data object, and add it to the DOM somehow (using jQuery in this case):

"use strict";

var data = {
    title: 'This Form',
    name: 'Joey'
};

var html = MyApp.templates.hellotemplate(data);
// console.log(html);

$(document).ready(function () {
    $('#message-area').html("jQuery changed this.");
});

Also, the index.html file will reference/run the app.js and jquery scripts:

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Gulp Web Test</title>
    <script src="node_modules/jquery/dist/jquery.min.js"></script>
    <script src="scripts/app.js"></script>
</head>
<body>
    <h1>Gulp Web Test</h1>
    <div id="message-area">Static DIV content.</div>
</body>
</html>

When you execute this traditional, static version of the app, it will run as expected.

### Starting to put it together

Now we configure the gulp file to run our tasks together:

1. minify our JS scripts
2. concat them into one JS file

We need to add this to the gulpfile.js:

var gulp = require('gulp');
var uglify = require('gulp-uglify');
var concat = require('gulp-concat');

gulp.task('default', ['scripts'], function () {
});

gulp.task('scripts', function () {
    return gulp.src(['node_modules/jquery/dist/jquery.js', 'scripts/**/*.js'])
        .pipe(concat('main.js'))
        .pipe(gulp.dest('dist/js'))
        .pipe(uglify())
        .pipe(gulp.dest('dist/js'));
});

Note i’ve created a new ‘scripts’ task, which is added as a dependent task to the ‘default’ task. The ‘scripts’ task uses the jquery.js file and all the *.js files in the /scripts/ directory as sources. They all go thru concat(), output to main.js in the dist/js/ directory, and then go thru uglify().

Next, we run the ‘gulp’ command on the node command line. After a couple of back-and-forth errors and corrections, we get this:

In Solution Explorer, you can refresh and now see the /dist/js/main.js file which was created. It should contain our custom js code as well as the whole of jQuery. Then we can update the HTML reference to the new output bundle file, and see if it runs the same way.
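One piece not shown yet is the gulp task that actually precompiles the .hbs templates. Based on the gulp-handlebars README, it would look roughly like this (the /templates/ source path, the MyApp.templates namespace, and the templates.js output name are my assumptions, chosen to match the app.js code earlier):

```javascript
// gulpfile.js 'templates' task: precompile /templates/*.hbs into one js file.
// The compiled functions get registered under MyApp.templates.<name>,
// which is how app.js can call MyApp.templates.hellotemplate(data).
var gulp = require('gulp');
var handlebars = require('gulp-handlebars');
var wrap = require('gulp-wrap');
var declare = require('gulp-declare');
var concat = require('gulp-concat');

gulp.task('templates', function () {
    return gulp.src('templates/*.hbs')
        .pipe(handlebars())                                 // compile .hbs to template specs
        .pipe(wrap('Handlebars.template(<%= contents %>)')) // wrap in a runtime call
        .pipe(declare({ namespace: 'MyApp.templates' }))    // expose as MyApp.templates.*
        .pipe(concat('templates.js'))
        .pipe(gulp.dest('js/dist/'));
});
```

Remember the page must also load the handlebars-runtime library (installed via Bower) before templates.js, since the precompiled output calls Handlebars.template().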
Delete the script tags for jquery.js and app.js, and add a single one for main.js:

<script src="dist/js/main.js"></script>

When you run the same index.html in the browser, you should get the same “jQuery changed this.” output, even though the only js file is ‘main.js’.

The output main.js is only 83K. I’m sure it could get smaller if we use gzip, etc. But it proves the concept works. It should be very easy to add other JS modules as needed. The downside is that installing this stuff to the project added 2,000 files under /node_modules/, adding about 12MB.

## Visual Studio and MSBuild Integration

I did find some info on how to run Gulp and Grunt from within VS as a post-build command, and hopefully in MSBuild as well.

For Gulp, we can just add a post-build step in the project:

• right-click the project -> Properties…
• click “Build Events”…
• to the “Post-build event command-line:” add the following:

cd $(ProjectDir)
gulp


That will run the ‘gulp’ command via VS when you ‘build’, instead of having to use the command line. Much more convenient. You can delete the main.js file, then ‘build’ again – it will regenerate. Reference: Running Grunt from Visual Studio post build event command line
http://stackoverflow.com/questions/17256934/running-grunt-from-visual-studio-post-build-event-command-line

Possibly much more full-featured and useful is the “Task Runner Explorer” VSIX extension. This is basically “real” tooling support in VS. I haven’t tried it yet, but i expect to.

Code for this post can be found here: https://github.com/nohea/Enehana.CodeSamples/tree/master/GulpWebTest

Update: I installed the Task Runner Explorer per the article above. It does work to view/run targets in the Gulpfile.js, so you don’t have to run on the command line, or have to build to execute the tasks.

Update 2: i have a follow-up post: Using Bower for JS package management instead of NPM

# ServiceStack CSV serializer with custom filenames

One of ServiceStack’s benefits is having one service method endpoint output to all supported serializers. The exact same code will output formats for JSON, XML, CSV, and even HTML. If you are motivated, you are also free to add your own.

Now in the case of CSV output, the web browser handling the download will prompt the user to save the text/csv stream as a file. The ‘File Save’ dialog will fill in the name of the file, if it is included in the HTTP response this way:

Note the filename is “Todos.csv”, because the request operation name is “Todos”. (i’m using the example service code).

There could be many cases where you would like to have much more fine-grained control of the default filename. However, you don’t want to pollute the Response DTO, since that would ruin the generic “any format” nature of the framework. You’ll probably also want to be able to have different filename-creation logic per-service, since you’ll often have many services in one application.

In my attempt to get to the bottom of this,

• I create a new blank ASP.NET project. The version i want is the 3.9.* version, since i’m not up on the v4 stuff.
• Using this site, i can identify the correct version of the NuGet package, and install the correct ones: https://www.nuget.org/packages/ServiceStack.Host.AspNet/3.9.71
• Then i install from the console.
PM> Install-Package ServiceStack.Host.AspNet -Version 3.9.71
• I see all my references are 3.9.71
• My web.config has the ServiceStack handlers installed, and my project has the App_Start\AppHost.cs

The demo project is the ToDo list. I’ll use it to test the CSV output. First, add a few items:

Then try to get the service ‘raw’:

http://localhost:49171/todos

You will see the generic ServiceStack output page:

Next, click the ‘csv’ link on the top right, in order to get the service with a ‘text/csv’ format. You will get the prompt dialog, as shown at the top of this post, with the ‘Todos.csv’ filename.

If you inspect the HTTP traffic in Fiddler, the request is:

GET /todos?format=csv HTTP/1.1

the response looks like this:

HTTP/1.1 200 OK
Cache-Control: private
Content-Type: text/csv
Vary: Accept-Encoding
Server: Microsoft-IIS/8.0
Content-Disposition: attachment;filename=Todos.csv
X-Powered-By: ServiceStack/3.971 Win32NT/.NET
X-AspNet-Version: 4.0.30319
X-SourceFiles: =?UTF-8?B?QzpcVXNlcnNccmF1bGdcRG9jdW1lbnRzXGVuZWhhbmFcY29kZVxFbmVoYW5hLkNvZGVTYW1wbGVzXFNzQ3N2RmlsZW5hbWVcdG9kb3M=?=
X-Powered-By: ASP.NET
Date: Tue, 11 Mar 2014 01:43:52 GMT
Content-Length: 88

Id,Content,Order,Done
2,make lunch,2,False
3,do launtry,3,False

The Content-Disposition: header defines the default filename of the save dialog box.

So how is this set? ServiceStack’s CSV serializer code sets it explicitly.

The best way i’ve discovered to do this is to plug in your own alternative CsvFormat plugin. If you view the source code, you’ll see where it sets the Content-Disposition: header in the HTTP Response.

    //Add a response filter to add a 'Content-Disposition' header so browsers treat it natively as a .csv file
    ResponseFilters.Add((req, res, dto) =>
    {
        if (req.ResponseContentType == ContentType.Csv)
        {
            res.AddHeader(HttpHeaders.ContentDisposition,
                string.Format("attachment;filename={0}.csv", req.OperationName));
        }
    });

The docs for ServiceStack’s CSV Format are clear on it:

https://github.com/ServiceStackV3/ServiceStackV3/wiki/ServiceStack-CSV-Format

A ContentTypeFilter is registered for ‘text/csv’, and it is implemented by ServiceStack.Text.CsvSerializer.

Additionally, a ResponseFilter is added, which adds a Response header. Note the Content-Disposition: header is explicitly using the Request ‘OperationName’ as the filename. Normally this will be the Request DTO, which in this case is named ‘Todos’.

res.AddHeader(HttpHeaders.ContentDisposition,
    string.Format("attachment;filename={0}.csv", req.OperationName));

So, what if we want to replace the default registration with different logic for setting the filename? We won’t need to change the registered serializer (still want the default CSV), but we should remove the ResponseFilter and add it in a slightly different way.

If you want to remove both, you can remove Feature.Csv. However, in this case i just want to change the filter. I had trouble altering the response filter directly, so instead i created my own ‘CsvFilenameFormat’, which looks almost exactly like ‘CsvFormat’. The difference is that i try to get a custom filename from the service code, by looking in the Request.Items Dictionary&lt;string, object&gt;.

The differing code in CsvFilenameFormat.Register():

    //Add a response filter to add a 'Content-Disposition' header so browsers treat it natively as a .csv file
    ResponseFilters.Add((req, res, dto) =>
    {
        if (req.ResponseContentType == ContentType.Csv)
        {
            string csvFilename = req.OperationName;

            // look for custom csv-filename set from Service code
            if (req.GetItemStringValue("csv-filename") != default(string))
            {
                csvFilename = req.GetItemStringValue("csv-filename");
            }

            res.AddHeader(HttpHeaders.ContentDisposition,
                string.Format("attachment;filename={0}.csv", csvFilename));
        }
    });

So if the service code sets a custom value, it will be used by the text/csv response for the filename. Otherwise, use the default.

In the service:

        public object Get(Todos request)
        {
            // set custom filename logic here, to be read later in the response filter on text/csv response
            this.Request.SetItem("csv-filename", "customfilename");
            // ...
        }

So the mechanism is set up; all we need to do is properly prevent the default Csv ResponseFilter and use our own instead.

In AppHost Configure(), add a line to remove the Csv plugin, and one to install our replacement:

            // clear
this.Plugins.RemoveAll(x => x is CsvFormat);

// install custom CSV
Plugins.Add(new CsvFilenameFormat());

At this point, everything is in place, and we can re-run our web app:

That’s the show. Thanks.

# Customizing IAuthProvider for ServiceStack.net – Step by Step

## Introduction

Recently, i started developing my first ServiceStack.net web service. As part of it, i found a need to add authentication to the service. Since my web service is connecting to a legacy application with its own custom user accounts, authentication, and authorization (roles), i decided to use the ServiceStack Auth model, and implement a custom IAuthProvider.

Oh yeah, the target audience for this post:

• C# / .NET / Mono web developer who is getting started learning how to build a RESTful web api using ServiceStack.net framework
• Wants to add the web API to an existing application with its own proprietary authentication/authorization logic

I tried to dive in and implement it in my app, but i got something wrong with the routing to /auth/{provider}, so i decided to take a step back and do the simplest thing possible, just so i understood the whole process. That’s what i’m going to do today.

I’m using Visual Studio 2012 Professional, but you could also use VS 2010, probably VS 2012 Express as well (or MonoDevelop, that’s another story i haven’t tried).

The simplest thing possible in my mind:

This is not an example of TDD-style development — more of a technology exploration.

OK, let’s get started.

## Creating HelloWorld

I’m not going to repeat what’s already in the standard ServiceStack.net docs, but the summary is:

• create an “ASP.NET Empty Web Application” (calling mine SSHelloWorldAuth)
• pull in ServiceStack assemblies via NuGet (not my usual practice, but it’s easy). In fact, i’m using the “Starter ASP.NET Website Template – ServiceStack”. That will install all the assemblies and create references, and also update Global.asax
• Create the Hello, HelloRequest, HelloResponse, and HelloService classes, just like the sample. Scratch that – they are already defined in the template at App_Start/WebServiceExamples.cs
• Run the app locally. You will see the “ToDo” app loaded and working in the default.htm. Also, you can test the Hello function at http://localhost:65227/hello (your port number may vary)

## Adding a built-in authentication provider

OK that was the easy part. Now we’re going to add the [Authenticate] attribute to the HelloService class.

[Authenticate]
public class HelloService : Service
{  ...

This will prevent the service from executing unless the session is authenticated already. In this case, it will fail, since nothing is set up.

## Enabling Authentication

Now looking in App_Start/AppHost.cs , i found an interesting section:

		/* Uncomment to enable ServiceStack Authentication and CustomUserSession
private void ConfigureAuth(Funq.Container container)
{
var appSettings = new AppSettings();

//Default route: /auth/{provider}
Plugins.Add(new AuthFeature(() => new CustomUserSession(),
new IAuthProvider[] {
new CredentialsAuthProvider(appSettings),
new BasicAuthProvider(appSettings),
}));

//Default route: /register
Plugins.Add(new RegistrationFeature());

//Requires ConnectionString configured in Web.Config
var connectionString = ConfigurationManager.ConnectionStrings["AppDb"].ConnectionString;
container.Register<IDbConnectionFactory>(c =>
new OrmLiteConnectionFactory(connectionString, SqlServerDialect.Provider));

container.Register<IUserAuthRepository>(c =>
new OrmLiteAuthRepository(c.Resolve<IDbConnectionFactory>()));

var authRepo = (OrmLiteAuthRepository)container.Resolve<IUserAuthRepository>();
authRepo.CreateMissingTables();
}
*/

Let’s use it. But i want to just enable CredentialsAuthProvider, since that is forms-based username/password authentication (the closest to what i want to customize).

A few notes on the code block above:

The “Plugins.Add(new AuthFeature(…))” part was documented.

“Plugins.Add(new RegistrationFeature());” was new to me, but now i see it adds the /register route and behavior.

For this test, i will go along with using OrmLite for the authentication tables. In order to do that:

• i’m using a new connection string “SSHelloWorldAuth”,
• adding it to Web.config: &lt;connectionStrings&gt;&lt;add name="SSHelloWorldAuth" connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=SSHelloWorldAuth;Integrated Security=SSPI;" providerName="System.Data.SqlClient" /&gt;&lt;/connectionStrings&gt;
• creating a new SQLEXPRESS database locally, called: SSHelloWorldAuth

Finally, we’ll have to add/enable the line to ConfigureAuth(container) , which will initialize the authentication system.

Now we’ll try running the app again: F5 and go to http://localhost:65227/hello in the browser again. I get a new problem:

In a way, it’s good, because the [Authenticate] attribute on the HelloService class worked – the resource was found, but sent a redirect to /login . However, no handler is set up for /login.

Separately, i checked if the OrmLite db got initialized with authRepo.CreateMissingTables(); , and it seems it did (2 tables created).

This is where i got hung up on my initial try to get it working, so i’m especially determined to get this working.

The only example of a /login implementation i found was in the ServiceStack source code tests. It seems like /login would be for a user to enter credentials in a form. If you are a script (javascript or web api client), you would authenticate at the /auth/{provider} URI.

That’s when i thought – is the /auth/* service set up properly? Let’s try going to http://localhost:65227/auth/credentials

So the good news is that it is set up. Why don’t we try to authenticate against /auth/credentials?

Well, first i should create a valid username/password combination. I can’t just insert it into the db, since the password must be one-way hashed. So i’m going to use the provider itself to do that.
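Conceptually, the stored value is a salted one-way hash, something like the following (a simplified illustration only — ServiceStack’s actual SaltedHash implementation differs in its details, and the class name here is mine):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class PasswordHasher
{
    // One-way: hash(salt + password). The plaintext password is never
    // recoverable from the stored salt and hash.
    public static string Hash(string password, string salt)
    {
        using (var sha = SHA256.Create())
        {
            var bytes = sha.ComputeHash(Encoding.UTF8.GetBytes(salt + password));
            return Convert.ToBase64String(bytes);
        }
    }

    // Verification re-hashes the submitted password and compares the results.
    public static bool Verify(string password, string salt, string storedHash)
    {
        return Hash(password, salt) == storedHash;
    }
}
```

This is why a plain SQL INSERT can’t create a usable account: the login check compares hashes, never plaintext.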

I copied a CreateUser() function from the ServiceStack unit tests, and will run it in my app’s startup. I modified it slightly to pass in the OrmLiteAuthRepository, and call it right after initializing the authRepo.

CreateUser(authRepo, 1, "testuser", null, "Test2#4%");

Run the app with F5 again, and then check the database: select * from userauth — we now have one row with a username and hashed password. Suitable for testing. (Don’t forget to disable CreateUser() afterwards.)

## Authenticating with GET

I would never do this on my “real” application. At minimum, i would only expose a POST method. But instead of writing some javascript, i’m going to try the web browser to submit credentials and try to authenticate.

First, i’m going to try and use a wrong password:

… i get the “Invalid UserName or Password” error, which is good.

Then i try again with the correct password. Success! This means my user id has a validated ServiceStack session on the server, and it is associated with my web browser’s ss-id cookie.

I can now go to the /hello service on the same browser session, and it should work:

Awesome. So we’ve figured out the /auth/credentials before the /hello service. Just for kicks, i stopped running the app in Visual Studio and terminated my local IIS Express web server instance, in order to try a new session. When i ran the project again and went to /hello , it failed as expected (which we want). Only by authenticating first, do we access the resource.

## IAuthProvider vs IUserAuthRepository

Note that i started this saying i wanted to implement my own IAuthProvider. However, ServiceStack also separately abstracts the IUserAuthRepository, which seems to be independently pluggable. Think of it this way:

• IAuthProvider is the authentication service code backing the HTTP REST API for authentication
• IUserAuthRepository is the provider’s .NET interface for accessing the underlying user/role data store (all operations)

Since my initial goal was to use username/password login with my own custom/legacy authentication rules, it seems more appropriate to subclass CredentialsAuthProvider (creating my own AcmeCredentialsAuthProvider).

I do not expect to have to create my own IUserAuthRepository at this time– but it would be useful if i had to expose my custom datastore to be used by any IAuthProvider. If you are only supporting one provider, you can put the custom code into the provider’s TryAuthenticate() and OnAuthenticated() methods. With a legacy system, you probably already have tools to manage user accounts and roles, so you’re not likely to need to re-implement all the IUserAuthRepository methods. However, if you need to implement Roles, a custom implementation of IUserAuthRepository may be in order (to be revisited).

This is going to be almost directly from the Authentication and Authorization wiki docs.

• Create a new class, AcmeCredentialsAuthProvider.cs
• subclass CredentialsAuthProvider
• override OnAuthenticated(), adding any additional data for the user to the session for use by the application
public class AcmeCredentialsAuthProvider : CredentialsAuthProvider
{
    public override bool TryAuthenticate(IServiceBase authService, string userName, string password)
    {
        //Add here your custom auth logic (database calls etc)
        //Return true if credentials are valid, otherwise false
        //(trivially hardcoded here to the test user created earlier)
        if (userName == "testuser" && password == "Test2#4%")
        {
            return true;
        }
        else
        {
            return false;
        }
    }

    public override void OnAuthenticated(IServiceBase authService, IAuthSession session, IOAuthTokens tokens, Dictionary<string, string> authInfo)
    {
        //Fill the IAuthSession with data which you want to retrieve in the app eg:
        session.FirstName = "some_firstname_from_db";
        //...

        //Important: You need to save the session!
        authService.SaveSession(session, SessionExpiry);
    }
}

As you can see, I did it in a trivially stupid way, but any custom logic of your own will do.

Finally, we change AppHost.cs ConfigureAuth() to load our provider instead of the default.

            Plugins.Add(new AuthFeature(() => new CustomUserSession(),
                new IAuthProvider[] {
                    new AcmeCredentialsAuthProvider(appSettings),
                }));

Run the app again; you should get the same results as before when passing correct or invalid username/password credentials, except in this case you can set a breakpoint and verify your AcmeCredentialsAuthProvider code is running.

So at the end of this I’m happy:

• I established how to create a ServiceStack service with a working custom username/password authentication
• I learned some things from the ServiceStack NuGet template beyond what is in the docs
• I understand better when it is sufficient to only override CredentialsAuthProvider for IAuthProvider, and when it may be necessary to implement a custom IUserAuthRepository (probably to implement custom Roles and/or Permissions)

Thanks for your interest. If you would like the code/project created with this post, I’ve pushed it to GitHub.

# Continuous Deployment for ASP.NET using Git, MSBuild, MSDeploy, and TeamCity

Continuous Deployment goes a step further than Continuous Integration, but based on the same principle: the more painless the deployment process is, the more often you will do it, leading to faster development in smaller, manageable chunks.

As a C#/ASP.NET developer deploying to an IIS server, the go-to tool from Microsoft is MSDeploy (aka WebDeploy). This article primarily discusses steps in Visual Studio 2010, Web Deploy 2.0, and TeamCity 7.1. I have read numerous articles which explain using Git w/TeamCity and MSBuild, but not so much specifically with MSDeploy.

My ideal setup is to have the CI server automate all the steps which would otherwise be done manually by the developer. I am using the TeamCity 7 continuous integration server. You can mix/match your own tools, but the basic steps would be the same:

• Edit your VS web project “Package/Publish” settings
• New code changes are committed to source control branch (in my case, Git)
• TeamCity build configuration triggers builds from VCS repository (Git) when new commits are pushed up
• Build step: MSBuild builds code from .csproj, .sln or .msbuild xml file
• Build step: Run unit tests  (xUnit.net or other)
• Build step: MSBuild packages code to ZIP file
• Build step: MSDeploy deploys ZIP package to remote server (development or production)

I’ll go thru the steps in detail (except test running, which is important, but a separate focus).

## Step 1: edit the Visual Studio project properties

When deploying, there are some important settings in the project which affect deployment. To see them, in your solution explorer, right-click (project name) -> Properties… , tab “Package/Publish Web” …

• Configuration: Active (Debug) – this means the ‘Debug’ config is active in VS, and you are editing it. The ‘Debug’ and ‘Release’ configurations both can be selected and independently edited.
• Web Deployment Package Settings – check “Create deployment package as zip file”. We want the ZIP file so it can be deployed separately later.
• IIS Web Site/application name – This must match the IIS Web site entry on the target server. Note I use “MyWebApp/” with no app name after the path; that is how it looks in the web server config.

Save it with your project, and make sure your changes are checked into Git (pushed to origin/master). Those settings will be pulled from version control when the CI server runs the build steps.

## Step 2: add a Build Step in the TeamCity config

I edit the Build Steps, and add a second build step, to build the MyWebApp.sln directly, using msbuild.

MSBuild
Build file path: MyWebApp/MyWebApp.sln
Targets: Build
Command line parameters: /verbosity:diagnostic

## Step 3: fix build error by installing Microsoft Visual Studio 2010 Shell (Integrated) Redistributable Package

My first build after adding the web project failed. Here’s the error:

C:\TeamCity\buildAgent\work\be5c9bc707460fdf\MyWebApp\MyWebApp\MyWebApp.csproj(727, 3): error MSB4019: The imported project "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk.

I did a little research, and found this link:

http://stackoverflow.com/questions/3980909/microsoft-webapplication-targets-was-not-found-on-the-build-server-whats-your

Basically, we either need to install VS on the build server, manually copy the files over, or install the Microsoft Visual Studio 2010 Shell (Integrated) Redistributable Package. I’m going to try door #3.

## Step 4: Install the Microsoft Visual Studio 2010 Shell (Integrated) Redistributable Package

After installing the Microsoft Visual Studio 2010 Shell (Integrated) Redistributable Package on the build server, I go back into TeamCity and click the [Run...] button, which will force a new build. I have to do this because nothing changed in the Git source repository (I only installed new stuff on the server), so that won’t trigger a build.

Luckily, that satisfied the Web App build – success!

Looking in the build log, I do see it built MyWebApp.sln and MyWebApp.dll.

So build is good. Still no deployment to a server yet.

## Step 5: Install the MS Web Deployment tool

FYI, I’m following some hints from:

I get the Web Deployment Tool here and install it. After a reboot, the TeamCity login page returned a 404 error. It turns out Web Deploy has a service which listens on port 80, but so does the TeamCity Tomcat server. For the short term, I stop the Web Deploy web service in the control panel and start the TeamCity web service. The purpose of the Web Deployment Agent Service is to accept requests to that server from other servers; we don’t need that, because the TeamCity server will act as a client and deploy out to other web servers.

The Web Deployment Tool also has to be installed on the target web server. I’m not going to go too far into detail here, but you have to configure the service there to listen as well, so that when you run the deployment command, the target accepts it and installs the package. For the development server, I set up a new account named ‘webdeploy’ with permission to install. For production web servers, I’m not enabling it yet, but I did install Web Deploy so I can do a manual run on the server using Remote Desktop (will explain later).

## Step 6: Create a MSBuild command to package the Web project

http://www.troyhunt.com/2010/11/you-deploying-it-wrong-teamcity_24.html

In that post, the example “build-it-all” command is this:

msbuild Web.csproj
/P:Configuration=Deploy-Dev
/P:DeployOnBuild=True
/P:DeployTarget=MSDeployPublish
/P:MsDeployServiceUrl=https://AutoDeploy:8172/MsDeploy.axd
/P:AllowUntrustedCertificate=True
/P:MSDeployPublishMethod=WMSvc
/P:CreatePackageOnPublish=True
/P:Password=Passw0rd

This is a package and deploy in one step. However, I opted for a different path – separate steps for packaging and deployment. This allows for cases like building a Release package but deploying it manually.

So in our case, we’ll need to do the following:

• Try using the “Debug” config. That will use our dev server web.config settings. XML transformations in Web.Debug.config get applied to Web.config during the MSBuild packaging (just as if you ran ‘Publish’ in Visual Studio).

This is the msbuild package command:

"C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe"
MyWebApp/MyWebApp/MyWebApp.csproj
/T:Package
/P:Configuration=Debug;PackageLocation="C:\Build\MyWebApp.Debug.zip"

Let me explain the command parts:

• MyWebApp.csproj : path to VS project file to build. There are important options in there which get set from the project Properties tabs.
• /T:Package : create a ZIP package
• /P:Configuration=Debug;PackageLocation=*** : run the Debug configuration (the same as building in Visual Studio with ‘Debug’ selected). PackageLocation is where the ZIP package gets written; we will reference the package file later in the deployment command.

I tested this command on my local PC first. Once it was working, I ran the same command on the CI server via Remote Desktop (for me, a remote Windows 7 instance).

## Step 7: Create a Web Deploy command to deploy the project

• MsDeployServiceUrl – we’ll have to configure the development web server with the Web Deploy service.
• Set up a user account to connect as (deployuser)
• Get a complete working msdeploy.exe command which works on the command line
• Put the MSDeploy command into a new “Deploy” step in TeamCity

After a lot of testing, I got a good command, which is here:

"C:\Program Files\IIS\Microsoft Web Deploy V2\msdeploy.exe" -verb:sync
-source:package="C:\Build\MyWebApp.Debug.zip"
-allowUntrusted=true

This command is also worth explaining in detail:

• -verb:sync : makes the web site sync from the source to the destination
• -source:package=”C:\Build\MyWebApp.Debug.zip” : source is an MSBuild zip file package
• -dest:auto,wmsvc=devserver : use the settings in the package file to deploy to the server. The user account is an OS-level account with permission (I tried IIS users, but didn’t get it working). The hostname is specified, but not the IIS web site name (which was previously specified in the MSBuild project file via the project properties).

After deployment, i checked the IIS web server files, to make sure they had the latest DLLs and web.config file.

## Step 8: Package and Deploy from the TeamCity build steps

Since we now have two good commands, we have to add them to the build steps:

### MSBuild – Package step

Note – there is a special TeamCity MSBuild runner, but I went with the command-line runner, just because I already had it set up.

### MSDeploy – Deploy step

In this case, I had to use the command-line runner, since there is no MSDeploy runner.

When you run the build with these steps and they succeed, we finally have automatic deployment directly from Git!

You can review the logs in TeamCity interface after a build/deployment, to verify everything is as expected. If there are errors, those are also in the logs.

Now every time new code gets merged and pushed to the Git origin/master branch, it will automatically build and deploy to the development server. Another benefit is that the installed .NET assemblies will have version numbers which match the TeamCity build number, if you use the AssemblyInfo.cs patcher feature.

It will dramatically reduce the time needed to deploy to development – just check in your code, and it will build/deploy in a few minutes.

# ASP.NET MVC Custom Model Binder – Safe Updates for Unspecified Fields

Model Binders are one of the ASP.NET MVC framework’s celebrated features.

The typical way web apps work with a form POST is that the form’s key/value pairs are iterated through and processed. In MVC, this is available via the Action method’s FormCollection parameter.

        [HttpPost]
        public ActionResult Edit(int id, FormCollection collection)

You create your data object and have a line per field.

            dataObject.First_name = collection["first_name"];
            dataObject.Age = int.Parse(collection["age"]);

This gets a little tedious, especially when you have to check values for null or other invalid values.

MVC Model Binders do some “magic” to handle the details of mapping your HTTP POST to an object. You specify the typed parameter in the ActionResult method signature…

        [HttpPost]
        public ActionResult Edit(int id, MyCompany.POCO.MyModel model)

… and the framework handles the mapping to the object for you.

The good part: you just saved a lot of code, which is good for efficiency and for supporting/debugging.

The bad part: what happens when we edit/update an object and the form does not include all the fields? Any missing field just gets overwritten with the default .NET value and saved to the db.

For example, suppose the model record has a property called [phone_number], but this MVC form does not include it. Maybe the form had to hide some values from update, or the data model changed and added a field. In an Edit/update, the steps would be:

1. create the object from the class
2. copy the values from the form
3. save/update to the db

… we never actually grab the current value of [phone_number]; we just set it to the .NET default value for the string type. We lost real data. Not good.
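To make that failure mode concrete, here is a toy sketch (the model and field values are hypothetical, chosen to match the [phone_number] example): a freshly constructed model has default property values, so any field the form did not post ends up null when saved.

```csharp
using System;

public class MyModel   // hypothetical model matching the example above
{
    public string Name { get; set; }
    public string Phone_number { get; set; }
}

public static class BindingDemo
{
    // What the default Edit binding effectively does:
    public static MyModel NaiveBind()
    {
        var bound = new MyModel();   // fresh instance: all properties = .NET defaults
        bound.Name = "Alicia";       // the only field present on this form POST
        // Phone_number was never posted, so it is still default(string) == null
        return bound;
    }

    public static void Main()
    {
        // What the database currently holds:
        var dbRecord = new MyModel { Name = "Alice", Phone_number = "555-1234" };
        var bound = NaiveBind();
        // Saving 'bound' over 'dbRecord' silently wipes the phone number:
        Console.WriteLine(bound.Phone_number == null);   // True
    }
}
```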

## ActionResult method and Model Binder steps

What’s actually happening:

• the framework looks at the parameter type and executes the registered IModelBinder for it; if there is none, it uses DefaultModelBinder

DefaultModelBinder will do the following: (source here)

• create a new instance of the model – default values, i.e. default(MyModel)
• read the form POST collection from HttpRequestBase
• copy all the matching fields from the Request collection to the model properties
• run it thru the MVC Validator, if any
• return it to the controller ActionResult method for further action
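The steps above can be sketched as a toy stand-in (my own simplification, not the real DefaultModelBinder source; form values are modeled as a plain dictionary, and prefixes, value providers, and validation are omitted):

```csharp
using System;
using System.Collections.Generic;

public class Person   // hypothetical model for illustration
{
    public string Name { get; set; }
    public int Age { get; set; }
}

public static class ToyBinder
{
    // 1. create a default instance, 2. copy matching form keys via reflection,
    // 3. return the model (validation step omitted).
    public static T Bind<T>(IDictionary<string, string> form) where T : new()
    {
        var model = new T();   // all properties start at .NET default values
        foreach (var pair in form)
        {
            var prop = typeof(T).GetProperty(pair.Key);
            if (prop == null) continue;   // no matching property on the model
            prop.SetValue(model,
                Convert.ChangeType(pair.Value, prop.PropertyType), null);
        }
        return model;
    }

    public static void Main()
    {
        var form = new Dictionary<string, string> { { "Name", "Alice" }, { "Age", "30" } };
        var p = Bind<Person>(form);
        Console.WriteLine(p.Name + " / " + p.Age);   // Alice / 30
    }
}
```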

## Writing code in the Action method to fix the problem

My first step to deal with the issue was to fall back to the FormCollection model binder and hand-code the fix. It looks something like this:

        [HttpPost]
        public ActionResult Edit(int id, MyCompany.POCO.MyModel model, FormCollection collection)
        {
            // update
            if (!ModelState.IsValid)
            {
                return View("Edit", model);
            }

            var poco = modelRepository.GetByID(id);

            // map form collection to POCO
            // * IMPORTANT - we only want to update entity properties which have been
            //   passed in on the Form POST.
            //   Otherwise, we could be setting fields = default when they have real data in db.
            foreach (string key in collection)
            {
                // key = "Id", "Name", etc.
                // use reflection to set the POCO property from the FormCollection
                System.Reflection.PropertyInfo propertyInfo = poco.GetType().GetProperty(key);
                if (propertyInfo != null)
                {
                    // poco has the form field as a property
                    // convert from string to actual type
                    propertyInfo.SetValue(poco, Convert.ChangeType(collection[key], propertyInfo.PropertyType), null);
                    // InvalidCastException if failed.
                }
            }

            modelRepository.Save(poco);

            return RedirectToAction("Index");
        }

In this example, modelRepository could be using NHibernate, EF, or stored procs under the hood, but it could be any data source. We loop thru each form post key and try to find a matching property on the model (using reflection). If it matches, we convert the string value from the form collection and set it as the value for that property (also using reflection).
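One caveat worth flagging here (my own addition, not from the code above): Convert.ChangeType throws InvalidCastException when the target type is Nullable&lt;T&gt;, such as int? or DateTime?. If your POCO has nullable properties, a small helper that unwraps the underlying type first avoids this – a sketch:

```csharp
using System;

public static class FormValueConverter
{
    // Convert a raw form string to the target property type, tolerating Nullable<T>.
    // Convert.ChangeType cannot target int?/DateTime? directly.
    public static object ConvertValue(string raw, Type targetType)
    {
        var underlying = Nullable.GetUnderlyingType(targetType);
        if (underlying != null)   // target is Nullable<T>
        {
            if (string.IsNullOrEmpty(raw))
                return null;      // empty form field -> null for nullable properties
            return Convert.ChangeType(raw, underlying);
        }
        return Convert.ChangeType(raw, targetType);
    }

    public static void Main()
    {
        Console.WriteLine(ConvertValue("42", typeof(int?)));          // 42
        Console.WriteLine(ConvertValue("", typeof(int?)) == null);    // True
    }
}
```

You could swap ConvertValue() in for the bare Convert.ChangeType call inside the foreach loop if your models use nullable columns.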

This works and is good, until you realize you have to insert it into every Action method. We could also go traditional, and just stick it in a function call. But we want to leverage the MVC convention-over-configuration philosophy. So now we’re going to try wrapping it in a custom model binder class.

## Creating a Custom Model Binder to fix the problem

To avoid the “unspecified field” problem, we want a model binder to actually do the following on Edit:

• Get() the model from the repository by id, instead of creating a new default instance of the model
• Update the fields of the persisted model which match from the FormCollection
• run it thru the MVC Validator, if any
• return it to the controller ActionResult method for further action (like Save() )

I am going to define a generic class which is good for any of my POCO types, and inherit from DefaultModelBinder:

    public class PocoModelBinder<TPoco> : DefaultModelBinder
    {
        MyCompany.Repository.IPocoRepository<TPoco> ModelRepository;

        public PocoModelBinder(MyCompany.Repository.IPocoRepository<TPoco> modelRepository)
        {
            this.ModelRepository = modelRepository;
        }

Note, I also inject my Repository (I use IoC), so that I can retrieve the object before update.

DefaultModelBinder has the methods CreateModel() and BindModel(), and we’re going to build on those.

        public object CreateModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
        {
            // http://stackoverflow.com/questions/752/get-a-new-object-instance-from-a-type-in-c
            TPoco poco = (TPoco)typeof(TPoco).GetConstructor(new Type[] { }).Invoke(new object[] { });

            // this is from the Route url: ~/{controller}/{action}/{id}
            if (controllerContext.RouteData.Values["action"].ToString() == "Edit")
            {
                // for Edit(), get from Repository/database
                string id = controllerContext.RouteData.Values["id"].ToString();
                poco = this.ModelRepository.GetByID(Int32.Parse(id));
            }
            else
            {
                // call default CreateModel() -- for the Create method
                poco = (TPoco)base.CreateModel(controllerContext, bindingContext, poco.GetType());
            }

            return poco;
        }

As you can see, in CreateModel(), if it is an Edit call, we retrieve the model object by the id specified in the URL (already parsed into the RouteData collection). If it is not an Edit, we just call the base class CreateModel(); for example, a Create() call may also use the same ModelBinder.

Now, the BindModel() method is where we move our logic to iterate thru the Form key/value pairs and update the POCO. In this version, we only update fields present in the form, and leave other properties alone:

        public override object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
        {
            object model = this.CreateModel(controllerContext, bindingContext);

            // map form collection to POCO
            // * IMPORTANT - we only want to update entity properties which have been
            //   passed in on the Form POST.
            //   Otherwise, we could be setting fields = default when they have real data in db.
            foreach (string key in controllerContext.HttpContext.Request.Form.Keys)
            {
                // key = "Pub_id", "Name", etc.
                // use reflection to set the POCO property from the FormCollection
                // http://stackoverflow.com/questions/531025/dynamically-getting-setting-a-property-of-an-object-in-c-2005
                System.Reflection.PropertyInfo propertyInfo = model.GetType().GetProperty(key);
                if (propertyInfo != null)
                {
                    // poco has the form field as a property
                    // convert from string to actual type
                    // http://stackoverflow.com/questions/1089123/c-setting-a-property-by-reflection-with-a-string-value
                    propertyInfo.SetValue(model, Convert.ChangeType(controllerContext.HttpContext.Request.Form[key], propertyInfo.PropertyType), null);
                    // InvalidCastException if failed.
                }
            }

            return model;
        }

Great. Now that we have our ModelBinder, we have to tell our MvcApplication to use it. We add the standard ModelBinders registration to Application_Start(), using the MyModel type and repository from the earlier examples:

            // Custom Model Binders
            ModelBinders.Binders.Add(typeof(MyCompany.POCO.MyModel),
                new PocoModelBinder<MyCompany.POCO.MyModel>(modelRepository));