Category Archives: General

Svelte Complex Forms, part 3 – components using get/setContext() for less passing props

In my prior two posts, i created a more complex dynamic and hierarchical form, including radio buttons, and extracted the “sveltey” html into a couple components.

In this post, i’m refining my components a bit, to reduce the boilerplate attributes needed for each instance. Currently, they look like this:

<ZInput
	nameAttr="fullname"
	nameLabel="Full Name"
	bindValue={$form.fullname}
	errorText={$errors?.fullname}
	{handleChange}
/>

<ZRadio
	nameAttr="prefix"
	nameLabel="Prefix"
	itemList={prefixOptions}
	itemValueChecked="n/a"
	errorText={$errors?.prefix}
	{handleChange}></ZRadio>

I would like to remove the errorText and handleChange props, even the bindValue if possible.

To avoid passing explicit props just to reference the $errors object and the handleChange function, i’ll use Svelte’s getContext() and setContext(). I borrowed the approach from the svelte-forms-lib optional components, but mine are different in that they also include a <label> and an $errors indicator. For a great explanation of Svelte Context, see Tan Li Hau’s Store vs Context tweet/video.

The form.svelte page contains the bare <form> element and the createForm() call, which returns $form, $errors, and the handle* functions. We add a new setContext() call, setting those objects. This allows child components to call getContext() and access the same objects, without requiring them to be passed in via props.

const { form, errors, state, handleChange, handleSubmit, handleReset } 
  = createForm(formProps);

// allows for referencing the internal $form and $errors from this page, 
// so we can add the array handlers
setContext(key, {
	form,
	errors,
	handleChange,
	handleSubmit,
});
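
Note that key here is the context key exported by svelte-forms-lib (the same one the components import below), so the form page needs these imports alongside createForm:

import { setContext } from 'svelte';
import { key } from 'svelte-forms-lib';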

Now in the child components, add getContext() and modify the usage.

ZInput <script> section:

// allows the Form* components to share state with the parent form
const { form, errors, handleChange } = getContext(key);

Then we won’t have to pass those objects in via props. However, we still have a problem…

Impedance mismatch: flat $form keys vs. hierarchical $errors object

As discussed in my prior post, the form’s name keys are “flat” strings, while the $errors store is hierarchical, meaning we refer to an input by its flat key but must traverse $errors as a nested object. For example:

$form                        $errors
$form['fullname']            $errors.fullname
$form['profile.address']     $errors.profile.address
$form['contacts[0].name']    $errors.contacts[0].name
$form['contacts[2].name']    $errors.contacts[2].name

I think this is the inevitable mismatch: the html form has a list of flat “name”s, as in a traditional POST, while the data it handles is hierarchical.

If we try to use the key passed in for “name”, it won’t work to find the matching error message. So we have to convert the string key “contacts[2].name” to the equivalent object reference $errors.contacts[2].name in order to display the linked error message (if any).

I ended up using a function from Stack Overflow which takes an object and a string key, and returns the value in the object that the string key points to:

// window.a = {b: {c: {d: {etc: 'success'}}}}
// getScopedObj(window, `a.b.c.d.etc`)             // success
// getScopedObj(window, `a['b']["c"].d.etc`)       // success
// getScopedObj(window, `a['INVALID']["c"].d.etc`) // undefined
export function getScopedObj(scope, str) {
    // console.log(`getScopedObj(scope, ${str})`);
    let obj = scope, arr;

    try {
        arr = str.split(/[\[\]\.]/) // split by [,],.
            .filter(el => el)             // filter out empty one
            .map(el => el.replace(/^['"]+|['"]+$/g, '')); // remove string quotation
        arr.forEach(el => obj = obj[el])
    } catch (e) {
        obj = undefined;
    }

    return obj;
}
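
For example, given a hypothetical $errors shape from the contacts array (my own illustration):

const errs = { contacts: [{ name: '' }, { name: '' }, { name: 'name is a required field' }] };

getScopedObj(errs, 'contacts[2].name'); // 'name is a required field'
getScopedObj(errs, 'contacts[9].name'); // undefined (no such index)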

Then my final component looks like this:

<script>
    import { getScopedObj } from "$lib/util";
    import { getContext } from "svelte";
    import { key } from "svelte-forms-lib";
    export let nameAttr;
    export let nameLabel;
    export let bindValue;
    // allows the Form* components to share state with the parent form
    const { form, errors, handleChange } = getContext(key);
</script>
<div>
    <label for={nameAttr}>{nameLabel}</label>
    <input
        placeholder={nameLabel}
        name={nameAttr}
        on:change={handleChange}
        on:blur={handleChange}
        bind:value={bindValue}
    />
    {#if getScopedObj($errors, nameAttr)}
        <div class="form-error">{getScopedObj($errors, nameAttr)}</div>
    {/if}
</div>

<ZRadio> can be modified in a similar way. Now the component tags are more concise:

<ZInput
	nameAttr={`contacts[${j}].email`}
	nameLabel="Email"
	bindValue={$form.contacts[j].email}
	/>

<ZRadio
	nameAttr={`contacts[${j}].contacttype`}
	nameLabel="Contact Type"
	itemList={contactTypes}
	itemValueChecked="n/a"
	/>

… and the user behavior is the same.

Code for part 3 is at:

https://github.com/nohea/enehana-complex-svelte-form/tree/part3

Svelte Complex Forms, part 2 – refactoring into custom components

As a follow-up to my last post Svelte Complex Forms with radio buttons, dynamic arrays, and Validation (svelte-forms-lib and yup), i created a more complex dynamic and hierarchical form, including radio buttons. I was glad it worked, but not happy about the more complex array prefixing.

In this post, i will attempt to extract the “sveltey” html into a couple components. I’ll just name them with a Z-prefix for kicks.

  • a component with the label + input type=text + error (ZInput)
  • a component with the label + input type=radio + error (ZRadio)

Creating them under src/lib/c/*.svelte

The idea here is to supply all the variables in the component as props (a la Component Format), so the components are relatively generalized in the app, and we can set them in our existing each loops for the form.

ZInput : input type=text

Let’s start simple – the text input. Our first example looks like this:

<div>
	<label for="fullname"> Full Name </label>
	<input
		type="text"
		name="fullname"
		bind:value={$form.fullname}
		class=""
		placeholder="Full Name"
		on:change={handleChange}
		on:blur={handleChange}
	/>
	{#if $errors.fullname}
		<div class="error-text">{$errors.fullname}</div>
	{/if}
</div>

The props to extract look like this:

  • nameAttr (fullname)
  • nameLabel (Full Name)
  • bindValue ($form.fullname)
  • errorText ($errors.fullname)
  • handleChange (needs to be referenced to the parent page/component)

Looking at the more nested example, the same props apply:

<div>
	<label for={`contacts[${j}].email`}>Email</label>
	<input
		placeholder="email"
		name={`contacts[${j}].email`}
		on:change={handleChange}
		on:blur={handleChange}
		bind:value={$form.contacts[j].email}
	/>
	{#if $errors?.contacts[j]?.email}
		<div class="error-text">{$errors.contacts[j].email}</div>
	{/if}
</div>

Here’s the component i will create:

<script>
    export let nameAttr;
    export let nameLabel;
    export let bindValue;
    export let errorText;
    export let handleChange;
</script>
<div>
    <label for={nameAttr}>{nameLabel}</label>
    <input
        placeholder={nameLabel}
        name={nameAttr}
        on:change={handleChange}
        on:blur={handleChange}
        bind:value={bindValue}
    />
    {#if errorText}
        <div class="error-text">{errorText}</div>
    {/if}
</div>

… and the ways to call it from the form.svelte page:

<ZInput nameAttr="fullname"
	nameLabel="Full Name"
	bindValue={$form.fullname} 
	errorText={$errors?.fullname} 
	handleChange={handleChange}></ZInput>
<ZInput nameAttr={`contacts[${j}].email`}
	nameLabel="Email"
	bindValue={$form.contacts[j].email} 
	errorText={$errors?.contacts[j]?.email} 
	handleChange={handleChange}></ZInput>

ZRadio – input type=radio in an each loop

The radio buttons are a little more involved, since we’ll have to supply an array of objects to the component in a generic way.

The current code looks like this:

<div>
	<label for={`contacts[${j}].product_id`}>Product</label>
	{#each products as p, i}
		<label class="compact">
			<input
				type="radio"
				id={`contacts[${j}].product_id-${p.product_id}`}
				name={`contacts[${j}].product_id`}
				value={p.product_id}
				on:change={handleChange}
				on:blur={handleChange}
			/>
			<span> {p.product_name} [{p.product_id}]</span>
		</label>
	{/each}
	{#if $errors.contacts[j]?.product_id}
		<div class="error-text">{$errors.contacts[j].product_id}</div>
	{/if}
</div>

We’ll extract the following:

  • nameAttr
  • nameLabel
  • itemList [ { id, name, label, value} ]
  • itemValueChecked (if there is a pre-checked item – single choice)
  • errorText
  • handleChange

And we get…

<script>
    export let nameAttr;
    export let nameLabel;
    export let itemList;
    export let itemValueChecked;
    export let errorText;
    export let handleChange;

    function isChecked(checkedValue, itemValue) {
        if(checkedValue === itemValue) {
            return true;
        }
        else {
            return false;
        }
    }
</script>
<div>
    <label for={nameAttr}>{nameLabel}</label>
    {#each itemList as p, i}
        <label class="compact">
            <input
                type="radio"
                id={`${nameAttr}-${p.value}`}
                name={nameAttr}
                value={p.value}
                on:change={handleChange}
                on:blur={handleChange}
                checked={isChecked(itemValueChecked, p.value)}
            />
            <span> {p.label}{#if p.label != p.id}[{p.id}]{/if}</span>
        </label>
    {/each}
    {#if errorText}
        <div class="error-text">{errorText}</div>
    {/if}
</div>

Now the instantiation is a little more complex, as we’ll have to alter or remap the itemList objects to a consistent set of keys. For simple, non-object array lists, the id, name, label, and value are all the same. But for complex object lists, they are distinct.


Remapping the item lists:

let prefixOptions = ['Ms.', 'Mr.', 'Dr.'];
let genderOptions = ['F', 'M', 'X'];
let contactTypes = ['friend', 'family', 'acquaintance'];

let products = [
	{ product_id: 101, product_name: 'Boots' },
	{ product_id: 202, product_name: 'Shoes' },
	{ product_id: 333, product_name: 'Jeans' }
];

onMount(() => {
	// https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map#using_map_to_reformat_objects_in_an_array

	prefixOptions = simpleRemap(prefixOptions);
	genderOptions = simpleRemap(genderOptions);
	contactTypes = simpleRemap(contactTypes);

	products = products.map((element) => {
		return {
			id: element.product_id,
			name: element.product_name,
			label: element.product_name,
			value: element.product_id,
		};
	});
});

function simpleRemap(itemList) {
	return itemList.map(element => {
		return {
			id: element,
			name: element,
			label: element,
			value: element,
		};
	});
}
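
After the remap, each option carries the same four keys the component expects; for example:

console.log(simpleRemap(['Ms.', 'Mr.', 'Dr.'])[0]);
// { id: 'Ms.', name: 'Ms.', label: 'Ms.', value: 'Ms.' }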

Then calling it:

<ZRadio
	nameAttr="prefix"
	nameLabel="Prefix"
	itemList={prefixOptions}
	itemValueChecked="n/a"
	errorText={$errors?.prefix}
	{handleChange}></ZRadio>
<ZRadio
	nameAttr={`contacts[${j}].contacttype`}
	nameLabel="Contact Type"
	itemList={contactTypes}
	itemValueChecked="n/a"
	errorText={$errors.contacts[j]?.contacttype}
	{handleChange}></ZRadio>

<ZRadio
	nameAttr={`contacts[${j}].product_id`}
	nameLabel="Product"
	itemList={products}
	itemValueChecked="n/a"
	errorText={$errors.contacts[j]?.product_id}
	{handleChange}></ZRadio>

Now the functionality should be exactly the same, but the form code is less messy and more readable.

The source code changes are in the same git repo as Part 1, but in a branch “part2”.

https://github.com/nohea/enehana-complex-svelte-form/tree/part2

Svelte Complex Forms with radio buttons, dynamic arrays, and Validation (svelte-forms-lib and yup)

Overview

Building new web apps in 2021 using a Svelte front-end is fun, with more reactivity and less code. Almost any web app will have some kind of form, and it helps to have a basic form builder and validation framework.

In this post, i’ll be exploring the svelte-forms-lib library to create a form, bound to a hierarchical object, and also wired up to a validation object. The form will support dynamically adding/removing items from an array property. It will also support radio buttons, which must be handled differently, since they are multiple <input> elements tied to the same variable.

Building a Complex Form, with svelte-forms-lib

The form i want to build will be a mix of property types:

  • Simple properties, such as ‘fullname’ (text input) and ‘prefix’ (radio button input)
  • A named object property (‘profile’), which will have a subsection for key/value pairs like ‘address’ and ‘gender’
  • A named array property (‘contacts’), which can contain zero or more contacts (with properties ‘name’, ’email’, and ‘contacttype’)

These various properties will have their own validation rules, which we will deal with later. They will also be sent to the backend on form submit, as a single JSON object. Something like this:

{
    fullname: 'Keoki Gonsalves',
    prefix: 'Mr.',
    profile: {
        address: '123 Main St.',
        gender: 'M'
    },
    contacts: [
        {
            contacttype: 'friend',
            name: 'Gina Kekahuna',
            email: 'ginak@example.com',
        },
        {
            contacttype: 'acquaintance',
            name: 'Marlon Waits',
            email: 'mwaits@example.com',
        },
    ]
}

Now that we’ve visualized our data model on the client side, we can build a form to allow the user to populate it. Our challenge is to manage the slight impedance mismatch between a form builder library and the object structure, while allowing for an easy-to-use validation system.

Creating the svelte project, and creating the form with arrays and text inputs

I’ll create a vanilla sveltekit project, but it just needs to be svelte 3:

npm init svelte@next enehana-complex-svelte-form
cd enehana-complex-svelte-form
npm install
npm run dev -- --open

Making a page under /src/routes/form.svelte for this example. I will put as much as possible in this one page for simplicity’s sake, but normally we would split a few things off, as desired.

npm install svelte-forms-lib

We’ll start with a simple <script> section and the html form elements. Our example will be based on the svelte-forms-lib Forms Array example. Let’s use regular text inputs to start, but build in our array, which will support multiple contacts on the form.

The script section calls createForm() with the initial properties, which returns the $form and $errors observables/stores for linking the form elements with the JS object.

<script>
	import { createForm } from 'svelte-forms-lib';

	const formProps = {
		initialValues: {
			fullname: '',
			prefix: '',
			profile: {
				address: '',
				gender: ''
			},
			contacts: []
		},
		onSubmit: (values) => {
			console.log('onSubmit (via handleSubmit): ', JSON.stringify(values));
		}
	};

	const { form, errors, state, handleChange, handleSubmit, handleReset } = createForm(formProps);

	const addcontact = () => {
		console.log('addcontact()');
		$form.contacts = $form.contacts.concat({ name: '', email: '', contacttype: '' });
		$errors.contacts = $errors.contacts.concat({ name: '', email: '', contacttype: '' });
	};

	const removecontact = (i) => () => {
		$form.contacts = $form.contacts.filter((u, j) => j !== i);
		$errors.contacts = $errors.contacts.filter((u, j) => j !== i);
	};
</script>

Note the addcontact() and removecontact() functions for the contacts array, which pair with matching HTML form input sections.

We build out the HTML form to match, with CSS:

<main>
<div>
	<h1>Complex Svelte Form Example</h1>

	<h4>Test Form</h4>
	<form on:submit={handleSubmit}>
		<div>
			<label for="fullname"> Full Name </label>
			<input
				type="text"
				name="fullname"
				bind:value={$form.fullname}
				class=""
				placeholder="Full Name"
				on:change={handleChange}
				on:blur={handleChange}
			/>
		</div>

        <div>
			<label for="profile.address">Profile Address </label>
			<input
				type="text"
				name="profile.address"
				bind:value={$form.profile.address}
				class=""
				placeholder="Profile Address"
				on:change={handleChange}
				on:blur={handleChange}
			/>
		</div>

		<input type="submit" name="submit" value="submit button" />
	</form>
</div>

<div>
	<b>$form: </b>
	<pre>{JSON.stringify($form)}</pre>
</div>
<div>
	<b>$errors: </b>
	<pre>{JSON.stringify($errors)}</pre>
</div>

</main>

<style>
    label {
        display: inline-block;
        width: 200px;
    }

	.error-text {
		color: red;
	}
</style>

This is a simple 2-input form, but you can see the 2-way binding in action.

Now let’s add the dynamic contacts: [] array to the form. We loop using #each on the $form.contacts array, which starts empty. Each time we click “add”, an object is pushed onto the array, which is bound to a new form group. Those inputs are bound to items of the array, based on their 0-based index value (0, 1, 2, …).

        <h4>Contacts</h4>
        {#each $form.contacts as contact, j}
          <div class="form-group">
            <div>
              <label for={`contacts[${j}].name`}>Name</label>
              <input
                name={`contacts[${j}].name`}
                placeholder="name"
                on:change={handleChange}
                on:blur={handleChange}
                bind:value={$form.contacts[j].name}
              />
            </div>
    
            <div>
                <label for={`contacts[${j}].email`}>Email</label>
                <input
                placeholder="email"
                name={`contacts[${j}].email`}
                on:change={handleChange}
                on:blur={handleChange}
                bind:value={$form.contacts[j].email}
              />
            </div>
    
            {#if $form.contacts.length === j + 1}
                <button type="button" on:click={removecontact(j)}>[- remove last contact]</button>
            {/if}
          </div>
        {/each}
    
        {#if $form.contacts}
            <div>
                <button on:click|preventDefault={addcontact}>[+ add contact]</button>
            </div>
        {/if}

The importance of the name="" attribute matching the bound js object

We must keep in mind that the HTML form and the $form store are a more “flat” key/value data structure, whereas the object it is bound to is a dynamic javascript object, which can easily model hierarchical objects and arrays. This means the way we assign an <input name=""> needs to match the object. Otherwise, our form elements will modify the wrong sections of the object. I had a lot of trouble with this until i figured it out.

The <input> maps by the name="" attribute, or the id="" attribute if there is no name. The name/id attribute will be the key in the $form svelte store observable, as well as the matching $errors store.

Examples of the 2-way binding between form and objects:

I try to keep the naming as clear as possible.

form: <input name="fullname" bind:value={$form.fullname} />
object: $form.fullname

form: <input name="profile.address" bind:value={$form.profile.address} />
object: $form.profile.address

form: <input name="contacts[0].name" bind:value={$form.contacts[0].name} />
object: $form.contacts[0].name

form:
{#each $form.contacts as c, x}
	<input name={`contacts[${x}].name`} bind:value={$form.contacts[x].name} />
{/each}
object: $form.contacts[x].name

form:
{#each $form.contacts as c, x}
	{#each contactTypes as ct, y}
		<label>
			<input type="radio" name={`contacts[${x}].contacttype`} value={ct} /> {ct}
		</label>
	{/each}
{/each}
object: $form.contacts[x].contacttype

It can get a little complicated on the html form side, but i like it clear on the javascript side. Theoretically, the HTML could be wrapped into a svelte component to make the syntax cleaner. Let’s leave that to another day.

Adding in validation using ‘yup’

yup is a form validation library, inspired by Eran Hammer‘s joi.

It seems really simple.

  • npm i yup
  • import * as yup from 'yup';
  • define your schema declaratively
  • set it as the validationSchema in the svelte-forms-lib createForm
  • throw in the html $errors next to the form fields, for a visual feedback on invalid data

The validation will run at <form on:submit={handleSubmit}> by svelte-forms-lib, and optionally at the input form element level if you add the on:change={handleChange} and/or on:blur={handleChange} svelte attributes.

Add in a validator schema:

    const validator = yup.object().shape({
        fullname: yup.string().required(),
        prefix: yup.string(),
        profile: yup.object().shape({
            address: yup
                .string()
                .required(),
            gender: yup
                .string()
        }),
        contacts: yup.array().of(
            yup.object().shape({
                contacttype: yup.string(),
                name: yup.string().required(),
                email: yup.string(),
            })
        )
    });

adding a validationSchema property to formProps:

const formProps = {
    ...
    validationSchema: validator,
    ...
}

then adding the error/validation messages near the fields:

{#if $errors.fullname}
	<div class="error-text">{$errors.fullname}</div>
{/if}

and for the deeply-nested fields, i found they often have missing property errors, so i’m using the new javascript optional chaining operator ?.

{#if $errors?.contacts[j]?.name}
  <div class="error-text">{$errors.contacts[j].name}</div>
{/if}

Now we’ve got this working:

Note that onSubmit() doesn’t fire until all the fields pass yup validation.
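
If you want to see what yup reports for the current data outside of handleSubmit, here is a quick sketch (assuming the validator schema defined above):

validator
    .validate($form, { abortEarly: false }) // collect all errors, not just the first
    .then(() => console.log('form is valid'))
    .catch((err) => console.log(err.errors)); // array of validation messages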

Handling radio buttons and checkboxes

Radio buttons and checkboxes require special handling. At first i thought i had to wire up my own idiom between svelte-forms-lib and the svelte bind:group handler, but it turns out that’s not the case.

Sometimes a radio, checkbox, or select drop-down will have a list of simple values. In other cases, there could be complex values, where a list of items is pulled from a database. There could be a product_id to store, but a product_name to display. I’m going to try examples of each.

The simple examples will be prefixOptions and genderOptions. We define them as simple arrays:

const prefixOptions = ['Ms.', 'Mr.', 'Dr.'];
const genderOptions = ['F', 'M', 'X'];
const contactTypes = ['friend', 'family', 'acquaintance'];

The complex example:

    const products = [
        { product_id: 101, product_name: "Boots", },
        { product_id: 202, product_name: "Shoes", },
        { product_id: 333, product_name: "Jeans", },
    ];

For ‘prefix’, we have a similar label, but instead of one <input>, we get one for each option. So we loop thru the options using an {#each} loop, being careful to:

  • set all name="" attributes to the same input name
  • set the value="" to the actual value to store in the variable
  • use the on:change={handleChange} handler

<div>
	<label for="prefix"> Prefix </label>
	{#each prefixOptions as pre, i}
		<label class="compact">
			<input id={`prefix-${pre}`} 
			name="prefix" 
			value={pre}
			type="radio" 
			on:change={handleChange}
			on:blur={handleChange}
			/>
		<span> {pre} </span>
		</label>
	{/each}
	{#if $errors.prefix}
		<div class="error-text">{$errors.prefix}</div>
	{/if}
</div>

For $form.profile.gender, it is almost identical, but the naming must follow one level deeper:

<div>
	<label for="profile.gender"> Profile Gender</label>
	{#each genderOptions as g, i}
		<label class="compact">
			<input id={`profile.gender-${g}`} 
			name="profile.gender" 
			value={g}
			type="radio" 
			on:change={handleChange}
			on:blur={handleChange}
			/>
		<span> {g} </span>
		</label>
	{/each}
	{#if $errors.profile?.gender}
		<div class="error-text">{$errors.profile.gender}</div>
	{/if}
</div>

And with Contact Type, we need to include the array indexer in the name="" attribute, so we don’t stomp on values from other array items. It is already inside another {#each} loop, iterating over $form.contacts.

<div>
    <label for={`contacts[${j}].contacttype`}>Contact Type</label>
    {#each contactTypes as ct, i}
        <label class="compact">
            <input 
            type="radio" 
            id={`contacts[${j}].contacttype-${ct}`} 
            name={`contacts[${j}].contacttype`}
            value={ct}
            on:change={handleChange}
            on:blur={handleChange}
            />
        <span> {ct} </span>
        </label>
    {/each}
    {#if $errors.contacts[j]?.contacttype}
        <div class="error-text">{$errors.contacts[j].contacttype}</div>
    {/if}
</div>

Finally, let’s combine the array radio with a complex list of items: the ID will be the value, but the display will be a name or description:

<div>
	<label for={`contacts[${j}].product_id`}>Product</label>
	{#each products as p, i}
		<label class="compact">
			<input 
			type="radio" 
			id={`contacts[${j}].product_id-${p.product_id}`} 
			name={`contacts[${j}].product_id`}
			value={p.product_id}
			on:change={handleChange}
			on:blur={handleChange}
			/>
		<span> {p.product_name} [{p.product_id}]</span>
		</label>
	{/each}
	{#if $errors.contacts[j]?.product_id}
		<div class="error-text">{$errors.contacts[j].product_id}</div>
	{/if}
</div>

Conclusion

My takeaway is that based on this test of more complex form building and validation using svelte, i’m now confident i could build larger web apps the way i expect — with validation and dynamic forms, including arrays.

I’d like to improve and refactor the examples into components: either the ones provided, or my own.

References

https://svelte-forms-lib-sapper-docs.vercel.app/array

Nefe James – Top form validation libraries in Svelte

Source code at github

https://github.com/nohea/enehana-complex-svelte-form (part 1)

How to connect Hasura GraphQL real-time Subscription to a reactive Svelte frontend using RxJS and the new graphql-ws Web Socket protocol+library

By Raul Nohea Goodness
https://twitter.com/rngoodness
November 2021

Overview

I am in the middle of my once-or-twice-a-decade process of reevaluating my entire set of web software development tools and approaches. I’m using a number of great new tools, and a little JS lib i’m starting to use ties them all together, hopefully in a resilient way: graphql-ws.

The target audience for this post is a web developer using a modern GraphQL backend (Hasura in this case) and a modern reactive javascript/html front-end (in this example, Svelte). 

This post is about the reasons for my tech stack choices, and how to use graphql-ws to elegantly tie them together. 

Software Stack Curation

What is Hasura? Why use it?

Hasura, in my mind, is a revolutionary new backend data server. It sits in front of a database (PostgreSQL or others), and provides:

  • Instant GraphQL APIs (which are typed)
  • Configurable Authorization of resources, integrated at the SQL composition level
  • Bring your own Authentication/Authorization provider (using JWTs or not), such as NHost, Auth0, Firebase, etc. 
  • Integrate with other GraphQL sources
  • Integrate hooks with serverless functions/lambda
  • Open Source (self-host or cloud service)
  • Local CLI for developers

This can eliminate extensive hand-coding of backend REST APIs, with their custom cross-cutting concerns like Auth. It also replaces the need for OR/M-style data access in backend code.

What is GraphQL? Why use it?

Now, this was my question a couple years ago, until i saw Hasura v1. Now i can answer it. 

From graphql.org

A query language for your API
GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.

In slightly more normal coder-speak, it is a de facto standard for querying and mutating (insert/update/delete) a data source over the web: sending a GQL statement, executing it, and receiving a response in JSON.

GraphQL consoles are “aware” of the underlying database types, which makes it easy to:

  • Compose queries in a console like “graphiql” and test them
  • Then copy/paste your GQL into your Javascript client code editor, for run-time execution

Arguably, this is less work than hand-coding the SQL or ORM code into your REST endpoint code. The JSON response comes for free. GraphQL also makes it easy to merge multiple data sources into a single JSON response. 

What is Apollo Client? Why use it, and why would i not use it?

Apollo is a GraphQL client and server library. It is popular and there are many code examples for developers available. Since i am using Hasura i don’t need the Apollo Server, but i could use the Apollo Client to access my backend. 

My initial tests worked with it. However the Apollo Client also has its own state-management system for caching gql queries and responses. It seemed like an overkill solution for my uses. I’m sure it works for other projects, but since the new concept count (to learn) in this case is already high, i opted to not use it. 

Instead i started using a more lightweight library: https://graphql.org/graphql-js/graphql/

This worked well and was simple to understand, but only for queries and mutations, not subscriptions.

For gql subscriptions, there was a different library: apollographql/subscriptions-transport-ws, which handles graphql subscriptions over web sockets. We would want this in the case of a web UX which listens for changes in the underlying data, and reactively updates when it changes on the server.

What is graphql-ws? Why use it instead of subscriptions-transport-ws? 

subscriptions-transport-ws does work, but there are 3 reasons not to use it:

  • Bifurcated code – you have to use one lib for gql queries+mutations, and another for subscriptions
  • graphql-ws implements a more standard GraphQL over WebSocket Protocol, using ping/pong messages, instead of subscriptions-transport-ws GQL_* messages.
  • Apparently subscriptions-transport-ws is no longer actively maintained by the original developers, and they recommend using graphql-ws on their project page.

Note that Hasura added support for the graphql-ws protocol as of v2.0.8.

What are graphql subscriptions?

From the Hasura docs:

Subscriptions – Realtime updates

The GraphQL specification allows for something called subscriptions that are like GraphQL queries but instead of returning data in one read, you get data pushed from the server.

This is useful for your app to subscribe to “events” or “live results” from the backend, while allowing you to control the “shape” of the event from your app.

GraphQL subscriptions are a critical component of adding realtime or reactive features to your apps easily. GraphQL clients and servers that support subscriptions allow you to build great experiences without having to deal with websocket code!

Put another way, our front-end UX can “listen” for changes on the backend, and the backend will send the changes to the frontend over the web socket in real time. The “reactive” frontend can instantly re-render the update to the user. 

Not all graphql queries require using a subscription, but if we do use them, coding them will be much simpler to write and maintain. 

What is Svelte? Why use it?

Svelte is a Javascript front-end reactive framework (not unlike React), but it is succinct and performant, and implemented as a compiler (to JS). Plus, it is fun to learn and code in. I’m talking 1999-era fun 😊

I recommend watching Rich Harris’ talk: Rethinking Reactivity

You can use a different frontend framework. But Svelte makes it easy due to svelte “stores” implementing the observable contract: the subscribe() method. Components will reactively update if the underlying object being observed changes.

What are Javascript Observables? RxJS?

RxJS is a library for reactive programming using Observables, to make it easier to compose asynchronous or callback-based code. 

We don’t need RxJS to use observables, but it is a popular library. I used it with Angular in the past, and one of the simplest graphql-ws examples uses it, so i am too. 

In short, in Javascript you call an observable’s subscribe() method to listen for/handle updates in the observable’s value. 
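
A minimal example with RxJS’s timer(), which we’ll use again below:

import { timer } from 'rxjs';

// emits 0, 1, 2, ... every second, starting after 1 second
const sub = timer(1000, 1000).subscribe((val) => console.log('tick', val));

// later: stop listening and release the timer
sub.unsubscribe();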

The wire-up: two-way reactive front-end to backend using JS Observables + GraphQL Subscriptions over Web Sockets

The idea here is to render the rapidly-changing data in an HTML component for the user to just watch updates, without having to do anything. 

Design – Focus-group input slider

This proof-of-concept will be a slider for use in a “focus group”. A group of participants get in a room and watch a video or debate, and “dial” in real-time their opinion (positive or negative) as things are being said. This example will just be a single person’s input being displayed or charted. 

  • The data captured will include: focus group id (text), username, rating (integer – 0 to 100), and datetime of event. 
  • UI will include:
    • A start/stop button, to start recording rating records, in 1 second increments. 
    • A slider, which goes from 0 (negative/disagree) to 100 (positive/agree), default setting is 50 (neutral)
  • A grid/table will display the last 10 records recorded in real-time (implemented as a graphql subscription). 
  • Optional: implement a chart component which updates in real-time from the data set. 

Diagram

Code

Setup

npm init svelte@next fgslider
cd fgslider
code .

PostgreSQL

I want the table to look like this:

create table ratingtick (
   id serial,
   focusgroup text not null,
   username text not null,
   rating integer not null,
   tick_ts timestamp with time zone not null default current_timestamp
);
 
-- insert into ratingtick(focusgroup, username, rating) values ('pepsi ad', 'ekolu', 50);
-- insert into ratingtick(focusgroup, username, rating) values ('pepsi ad', 'ekolu', 65);
-- insert into ratingtick(focusgroup, username, rating) values ('pepsi ad', 'ekolu', 21);

In this case, i’m going to do it on my local machine. I’m also going to create the Hasura instance locally using hasura-cli. Of course, you can do this on your own infrastructure, your own servers or cloud provider, or the specialized NHost.io.

Hasura

I’m going to create a Hasura container locally, which will also have a PostgreSQL v12 instance. 

sudo apt install docker-compose docker

docker-compose up -d

If you run into a problem, just tweak docker-compose.yml. I changed the port mapping from 8080:8080 to 8087:8080.

Connect to the Hasura web console:
http://localhost:8087

Connect the Hasura container instance to the Postgresql container instance:

Grab the database URL from docker-compose.yml and connect the database:

You will now see ‘pgcontainer’ in the databases list. 

With Hasura, you can either create the Postgres schema first, then tell Hasura to scan the schema. Or create the schema in the Hasura console, which will execute the DDL on Postgres. Pick one or the other. 

For this project, we’ll skip permissions, or more accurately, we’ll configure a ‘public’ role on the table, and allow select/insert/update/delete permissions. 

Note: i had to add HASURA_GRAPHQL_UNAUTHORIZED_ROLE: public to the environment: section of docker-compose.yml and run “docker-compose up -d” to make it reload with the setting change, to treat anonymous requests as ‘public’ role (no need for x-hasura-role header). 

Let’s now test GraphQL queries in the GraphiQL tool. We should be able to set the x-hasura-role header to ‘public’ and still query/mutate. Setting the role header requires Hasura to evaluate the authorization according to that role. (Note i did have problems getting the role header to work, so i instead made ‘public’ the default anonymous role.)

We should also be able to insert via a mutation:

mutation MyMutation {
  insert_ratingtick_one(object: {focusgroup: "pepsi ad", username: "ekolu", rating: 50}) {
    id
  }
}

Response:

{
  "data": {
    "insert_ratingtick_one": {
      "id": 1
    }
  }
}

That means it inserted and returned the ‘id’ primary key of 1. 

After inserting a few, we can query again:
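
The query itself is along these lines (the field list matches the table):

query MyQuery {
  ratingtick(order_by: {id: asc}) {
    id
    focusgroup
    username
    rating
    tick_ts
  }
}

… which returns: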

{
  "data": {
    "ratingtick": [
      {
        "id": 1,
        "focusgroup": "pepsi ad",
        "username": "ekolu",
        "rating": 50,
        "tick_ts": "2021-11-16T22:56:07.094606+00:00"
      },
      {
        "id": 2,
        "focusgroup": "pepsi ad",
        "username": "ekolu",
        "rating": 45,
        "tick_ts": "2021-11-16T22:57:56.323054+00:00"
      },
      {
        "id": 3,
        "focusgroup": "pepsi ad",
        "username": "ekolu",
        "rating": 98,
        "tick_ts": "2021-11-16T22:58:01.047135+00:00"
      },
      {
        "id": 4,
        "focusgroup": "pepsi ad",
        "username": "ekolu",
        "rating": 96,
        "tick_ts": "2021-11-16T22:58:09.495674+00:00"
      },
      {
        "id": 5,
        "focusgroup": "pepsi ad",
        "username": "ekolu",
        "rating": 43,
        "tick_ts": "2021-11-16T22:58:17.550324+00:00"
      },
      {
        "id": 6,
        "focusgroup": "pepsi ad",
        "username": "ekolu",
        "rating": 23,
        "tick_ts": "2021-11-16T22:58:25.547917+00:00"
      }
    ]
  }
}

Finally, let’s test the subscription. We should be able to open the insert mutation in one window, and see the subscription update in real time in the second window. 

Good. At this point, i’m confident all is working on the Hasura end. Time to work on the front-end code. 

Svelte + graphql-ws

Please note that although i am using Svelte with graphql-ws, you can use any JS framework, or vanilla JS.

Remember, we created this directory as a sveltekit project, so now we’ll build on it. We do need to “npm install” to install the node dependencies. Then we can “npm run dev”, which will run the dev http server on localhost:3000.

  • Create a new /slider route as src/routes/slider/index.svelte
  • Add form inputs, and a slider widget
  • Add a display grid which will display the last 10 tick records

SvelteKit uses Vite for modern ES6 module builds, which uses dotenv-style .env, but with the VITE_* prefix. So we create a .env file with entry like so:

VITE_TEST="this is a test env"
VITE_HASURA_GRAPHQL_URL=ws://localhost:8087/v1/graphql

Note: you must change the URI protocol from http://localhost:8087/v1/graphql to ws://localhost:8087/v1/graphql , in order to use graphql-ws. It is not normal http, it is web sockets (ws://) or ws secure (wss://). Otherwise, you get an error: [Uncaught (in promise) DOMException: An invalid or illegal string was specified client.mjs:140:12]

Then you can refer to them in your app via the import.meta.env.* namespace (e.g. in src/routes/index.svelte):
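
A trivial sketch:

<script>
	// build-time env vars injected by Vite
	console.log(import.meta.env.VITE_TEST); // "this is a test env"
	const gqlUrl = import.meta.env.VITE_HASURA_GRAPHQL_URL;
</script>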

Now let’s get into the “fish and poi” a/k/a “meat and potatoes” section, the src/routes/slider/index.svelte page. 

First, the start/stop button, form elements and slider widget. Keeping it simple, i will install a custom svelte slider component:

npm install svelte-range-slider-pips --save-dev

Also installing rxjs, for the timer() and later for wrapping the graphql-ws handle. 

npm install rxjs

The first version here is basically a svelte app only, not using any backend yet:

<script>
import RangeSlider from "svelte-range-slider-pips";
import { timer } from 'rxjs';
 
let runningTicks = false;
let focusGroupName = "pepsi commercial";
let userName = "ekolu";
let sliderValues = [50]; // default
let tickLog = "";
 
let timerObservable;
let timerSub;
 
function timerStart() {
   runningTicks = true;
   timerObservable = timer(1000, 1000);
 
   timerSub = timerObservable.subscribe(val => {
       tickLog += `tick ${val}... `;
   });
}
 
function timerStop() {
   timerSub.unsubscribe();
   runningTicks = false;
}
 
</script>
 
<h1>Slider</h1>
<p>
   enter your focus group, name and click 'start'.
</p>
<p>
   Once it starts, move the slider depending on how much you
   agree/disagree with the video.
</p>
 
<form>
<label for="focusgroup">focus group: </label><input type="text" name="focusgroup" bind:value={focusGroupName} />
<label for="username">username: </label><input type="text" name="focusgroup" bind:value={userName} />
 
<label for="ratingslider">rating slider (0 to 100): </label>
 
<RangeSlider name="ratingslider" min={0} max={100} bind:values={sliderValues} pips all='label' />
<div>0 = bad/disagree, 50 = neutral, 100 = good/agree</div>
<div>slider Value: {sliderValues[0]}</div>
 
<button disabled={runningTicks} on:click|preventDefault={timerStart}>Start</button>
<button disabled={!runningTicks} on:click|preventDefault={timerStop}>Stop</button>
</form>
<div>
   Tick output: {tickLog}
</div>
 
<div>
   <a href="/">Home</a>
</div>

I got a number of things working together here:

  • Variables bound to UI components
  • A slider component which will have values from 0 to 100, bound to variable
  • An rxjs timer(), which executes a callback every second, bound to the start/stop buttons

Now i’m ready to hook up the graphql mutation and subscription. 

npm install graphql-ws

I’m going to create src/lib/graphql-ws.js to manage the client setup and subscription creation. 

import { createClient } from 'graphql-ws';
import { Observable } from 'rxjs';

export function createGQLWSClient(url) {
    // console.log(`createGQLWSClient(${url})`);
    return createClient({
        url: url,
    });
}

export async function createQuery(client, gql, variables) {
    // query
    return await new Promise((resolve, reject) => {
        let result;
        client.subscribe(
            {
                query: gql,
                variables: variables
            },
            {
                next: (data) => (result = data),
                error: reject,
                complete: () => resolve(result)
            }
        );
    });
}

export async function createMutation(client, gql, variables) {
    // same as query
    return createQuery(client, gql, variables);
}

export function createSubscription(client, gql, variables) {
    // hasura subscription
    // console.log("createSubscription()");
    const operation = {
        query: gql,
        variables: variables,
    };
    const rxjsobservable = toObservable(client, operation);
    // console.log("rxjsobservable: ", rxjsobservable);
    return rxjsobservable;
}

// wrap up the graphql-ws subscription in an observable
function toObservable(client, operation) {
    // console.log("toObservable()");
    // graphql-ws's client.subscribe() returns a dispose function, which we
    // call from the RxJS teardown, so unsubscribing the Observable also
    // cleans up the underlying graphql-ws subscription
    // https://rxjs.dev/guide/observable
    return new Observable(function subscribe(subscriber) {
        const dispose = client.subscribe(operation, {
            next: (data) => subscriber.next(data),
            error: (err) => subscriber.error(err),
            complete: () => subscriber.complete()
        });

        // Provide a way of canceling and disposing resources
        return function unsubscribe() {
            console.log("unsubscribe()");
            dispose();
        };
    });
}

Now we are going to:

  • setup the client in the index.svelte onMount() handler, 
  • execute createSubscription() in the onMount() handler and bind to a new grid/table component
  • execute createMutation() on every tick with the current values

// browser-only code
onMount(async () => {
	// setup the client in the index.svelte onMount() handler
	gqlwsClient = createGQLWSClient(import.meta.env.VITE_HASURA_GRAPHQL_URL);

	// execute createSubscription() in the onMount() handler

	// and bind to a new grid/table component
	// src/components/TopTicks.svelte
	const gql = `subscription MySubscription($limit:Int) {
ratingtick(order_by: {id: desc}, limit: $limit) {
id
focusgroup
username
rating
tick_ts
}
}`;
	const variables = { limit: 5 }; // how many to display
	const rxjsobservable = createSubscription(
		gqlwsClient,
		gql,
		variables
	);
	// const subscription = rxjsobservable.subscribe(subscriber => {
	// 	console.log('subscriber: ', subscriber);
	// });
	// console.log('subscription: ', subscription);
	// gqlwsSubscriptions.push(subscription);
	gqlwsObservable = rxjsobservable;
});
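
For the grid itself, here is a minimal sketch of what a TopTicks-style component could look like (the markup and prop name are my assumptions). Svelte auto-subscribes to anything with a subscribe() method via the $ prefix, which is why handing it the RxJS observable is enough:

<!-- src/components/TopTicks.svelte (sketch) -->
<script>
	// the RxJS observable created in onMount() above, passed in as a prop
	export let gqlwsObservable;
</script>

{#if $gqlwsObservable}
	{#each $gqlwsObservable.data.ratingtick as tick (tick.id)}
		<div>{tick.id} | {tick.username} | {tick.rating} | {tick.tick_ts}</div>
	{/each}
{/if}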

Timer start

   function timerStart() {
       runningTicks = true;
       timerObservable = timer(1000, 1000);
 
       timerSub = timerObservable.subscribe((val) => {
           tickLog += `tick ${val}... `;
 
           // execute createMutation() on every tick with the current values
           submitLatestRatingTick(gqlwsClient);
       });
   }

Functions to do the work:

   function submitLatestRatingTick(client) {
       const gql = `mutation MyMutation($focusgroup:String, $username:String, $rating:Int) {
 insert_ratingtick_one(object: {focusgroup: $focusgroup, username: $username,
   rating: $rating}) {
   id
 }
}
`;
       const variables = buildRatingTick();
 
       createMutation(client, gql, variables);
   }
 
   function buildRatingTick() {
       return {
           focusgroup: focusGroupName,
           username: userName,
           rating: sliderValues[0]
       };
   }

Note, we can test the gql in GraphiQL and copy/paste into the JS template strings, also using the variable syntax.

If that all works, we will have a sweet reactive graphql-ws app. 

Got it all working now! 😎

Animated GIF version:

Connecting it to a reactive chart is left as an exercise for the reader. 

github

find the source for this app here:
https://github.com/nohea/enehana-fgslider

References

graphql-ws: GraphQL over WebSocket Protocol compliant server and client.
https://github.com/enisdenjo/graphql-ws

SpinSpire: live code: Svelte app showing realtime Postgres data changes (GraphQL subscriptions)

Hasura – local docker install

Svelte

Partial Updates with HTTP PATCH using ServiceStack.net and the JSON Patch format (RFC 6902)

I have been looking into implementing partial updates via the HTTP PATCH method, using ServiceStack.net and the JSON Patch format (RFC 6902).

This is of interest since many updates do not neatly match the PUT method, which often is used for full entity updates (all properties). PATCH is intended to do one or more partial updates. There are a few blogs describing the use cases.

I’ve been happy using ServiceStack the way it was designed – RESTful, simple, using Message Based designs.

I could implement PATCH using my own message format – that is easy to do. Usually it would be the actual DTO properties, plus a list of fields which are actually going to be updated. You wouldn’t update all fields, and you don’t want to only update non-null properties, since sometimes “null” is a valid value for a property (it would be impossible to set a property to null from non-null).
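
A hypothetical sketch of that kind of custom message, for comparison (my own shape, not a standard):

{
    "employee": { "title": "Junior Developer", "cubicleNo": null },
    "fieldsToUpdate": ["title", "cubicleNo"]
}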

In my opinion, using JSON Patch for the Request body has pros and cons.
Pros:

  • is an official RFC
  • covers a lot of use cases

Cons:

  • very generic, so we lose some of the benefit of strong typing
  • doesn’t have a slot for the Id of a resource when calling PATCH /employees/{Id}
    • doing this the “JSON Patch way” would be { "op": "replace", "path": "/employees/123/title", "value": "Administrative Assistant" }, but that wastes the value of having it on the routing path.

JSON Patch supports a handful of operations: “add”, “remove”, “replace”, “move”, “copy”, “test”. I will focus on the simple “replace” op, since it easily maps to replacing a property on a DTO (or field in a table record).

The canonical example looks like this:

PATCH /my/data HTTP/1.1
Host: example.org
Content-Length: 55
Content-Type: application/json-patch+json
If-Match: "abc123"

[
    { "op": "replace", "path": "/a/b/c", "value": 42 }
]

I’m going to ignore the If-Match: / ETag: headers for now. Those will be useful if you want to tell the server to only apply your changes if the resource still matches your “If-Match” header (no changes in the meantime). “That exercise is left to the reader.”

Let’s say we have a more practical example:

  • an Employee class, backed by an [Employee] table, accessed by OrmLite
  • an EmployeeService class, implementing the PATCH method
  • the Request DTO to the Patch() method aligns to the JSON Patch structure

The Employee class would simply look like this (with routing for basic CRUD):

[Route("/employees", "GET,POST")]
[Route("/employees/{Id}", "GET,PUT")]
public class Employee
{
    public long Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
    public string Title { get; set; }
    public int? CubicleNo { get; set; }
    public DateTime StartDate { get; set; }
    public float Longitude { get; set; }
    public float Latitude { get; set; }
}

Now the shape of JSON Patch replace ops would look like this:

PATCH /employees/123 HTTP/1.1
Host: example.org
Content-Type: application/json

[
    { "op": "replace", "path": "/title", "value": "Junior Developer" },
    { "op": "replace", "path": "/cubicleno", "value": 23 },
    { "op": "replace", "path": "/startdate", "value": "2013-06-02T09:34:29-04:00" }
]

The path is the property name in this case, and the value is what to update to.

And yes, i also know i am sending Content-Type: application/json instead of Content-Type: application/json-patch+json . We’ll have to get into custom content type support later too.
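
For reference, an untyped browser-side call might look like this with jQuery ajax (jQuery is my assumption here; the strongly-typed ServiceStack client example is further below):

// hypothetical browser-side PATCH; values are sent as plain strings,
// matching the JsonPatchElement.value type on the server
$.ajax({
    type: 'PATCH',
    url: '/employees/123',
    contentType: 'application/json',
    data: JSON.stringify([
        { op: 'replace', path: '/title', value: 'Junior Developer' },
        { op: 'replace', path: '/cubicleno', value: '23' }
    ])
});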

Now, sending a generic data structure as the Request DTO to a specific resource ID doesn’t cleanly map to the ServiceStack style, because:

  • each Request DTO should be a unique class and route
  • there is not a field in the Request for the ID of the entity

The simple way to map the JSON to a C# class would define an “op” element class, and have a List<T> of them, like so:

public class JsonPatchElement
{
    public string op { get; set; } // "add", "remove", "replace", "move", "copy" or "test"
    public string path { get; set; }
    public string value { get; set; }
}

We create a unique Request DTO so we can route to the Patch() service method.

[Route("/employees/{Id}", "PATCH")]
public class EmployeePatch : List<JsonPatchElement>
{
}

But how do we get the #$%&& Id from the route?? This code throws RequestBindingException! But i can’t change the shape of the PATCH request body from a JSON array [].

The answer was staring me in the face: just add it to the DTO class definition, and ServiceStack will map to it. I was forgetting the C# class doesn’t have to be the same shape as the JSON.

[Route("/employees/{Id}", "PATCH")]
public class EmployeePatch : List<JsonPatchElement>
{
    public long Id { get; set; }
}

Think of this class as a List<T> with an additional Id property.

When the method is called, the JSON Patch array is mapped and the Id is copied from the route {Id}.

public object Patch(EmployeePatch dto)
{
    // dto.Id == 123
    // dto[0].path == "/title"
    // dto[0].value == "Joe"
    // dto[1].path == "/cubicleno"
    // dto[1].value == "23"

The only wrinkle is all the JSON values come in as C# string, even if they are numeric or Date types. At least you will know the strong typing from your C# class, so you know what to convert to.

My full Patch() method is below. Note the partial update code uses reflection to update properties of the same name, and does primitive type checking when parsing the string values from the request DTO.

public object Patch(EmployeePatch dto)
{
    // partial updates

    // get from persistent data store by id from routing path
    var emp = Repository.GetById(dto.Id);

    if (emp != null)
    {
        // read from request dto properties
        var properties = emp.GetType().GetProperties();

        // update values which are specified to update only
        foreach (var op in dto)
        {
            string fieldName = op.path.Replace("/", "").ToLower(); // assume leading /slash only for example

            // patch field is in type
            if (properties.ToList().Where(x => x.Name.ToLower() == fieldName).Count() > 0)
            {
                var persistentProperty = properties.ToList().Where(x => x.Name.ToLower() == fieldName).First();

                // update property on persistent object
                // i'm sure this can be improved, but you get the idea...
                if (persistentProperty.PropertyType == typeof(string))
                {
                    persistentProperty.SetValue(emp, op.value, null);
                }
                else if (persistentProperty.PropertyType == typeof(int))
                {
                    int valInt = 0;
                    if (Int32.TryParse(op.value, out valInt))
                    {
                        persistentProperty.SetValue(emp, valInt, null);
                    }
                }
                else if (persistentProperty.PropertyType == typeof(int?))
                {
                    int valInt = 0;
                    if (op.value == null)
                    {
                        persistentProperty.SetValue(emp, null, null);
                    }
                    else if (Int32.TryParse(op.value, out valInt))
                    {
                        persistentProperty.SetValue(emp, valInt, null);
                    }
                }
                else if (persistentProperty.PropertyType == typeof(DateTime))
                {
                    DateTime valDt = default(DateTime);
                    if (DateTime.TryParse(op.value, out valDt))
                    {
                        persistentProperty.SetValue(emp, valDt, null);
                    }
                }

            }
        }

        // update
        Repository.Store(emp);

    }

    // return HTTP Code and Location: header for the new resource
    // 204 No Content; The request was processed successfully, but no response body is needed.
    return new HttpResult()
    {
        StatusCode = HttpStatusCode.NoContent,
        Location = base.Request.AbsoluteUri,
        Headers = {
            // allow jquery ajax in firefox to read the Location header - CORS
            { "Access-Control-Expose-Headers", "Location" },
        }
    };
}

For an example of calling this from the strongly-typed ServiceStack rest client, my integration test looks like this:

[Fact]
public void Test_PATCH_PASS()
{
    var restClient = new JsonServiceClient(serviceUrl);

    // dummy data
    var newemp1 = new Employee()
    {
        Id = 123,
        Name = "Kimo",
        StartDate = new DateTime(2015, 7, 2),
        CubicleNo = 4234,
        Email = "test1@example.com",
    };
    restClient.Post<object>("/employees", newemp1);

    var emps = restClient.Get<List<Employee>>("/employees");

    var emp = emps.First();

    var empPatch = new Operations.EmployeePatch();
    empPatch.Add(new Operations.JsonPatchElement()
    {
        op = "replace",
        path = "/title",
        value = "Kahuna Laau Lapaau",
    });

    empPatch.Add(new Operations.JsonPatchElement()
    {
        op = "replace",
        path = "/cubicleno",
        value = "32",
    });

    restClient.Patch<object>(string.Format("/employees/{0}", emp.Id), empPatch);

    var empAfterPatch = restClient.Get<Employee>(string.Format("/employees/{0}", emp.Id));

    Assert.NotNull(empAfterPatch);
    // patched
    Assert.Equal("Kahuna Laau Lapaau", empAfterPatch.Title);
    Assert.Equal("32", empAfterPatch.CubicleNo.ToString());
    // unpatched
    Assert.Equal("test1@example.com", empAfterPatch.Email);
}

I am uploading this code to github as a full working Visual Studio 2013 project, including xUnit.net tests.

I hope this has been useful to demonstrate the flexibility of using ServiceStack and C# to implement the HTTP PATCH method using JSON Patch (RFC 6902) over the wire.

Update: i refactored the code so that any object can have its properties “patched” from a JsonPatchRequest DTO by using an extension method populateFromJsonPatch().

public object Patch(EmployeePatch dto)
{
    // partial updates
    // get from persistent data store by id from routing path
    var emp = Repository.GetById(dto.Id);

    if (emp != null)
    {
        // update values which are specified to update only
        emp.populateFromJsonPatch(dto);

        // update
        Repository.Store(emp);
    }

    // return HTTP 204 No Content and the Location: header, as before
    return new HttpResult()
    {
        StatusCode = HttpStatusCode.NoContent,
        Location = base.Request.AbsoluteUri,
    };
}
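
For reference, here’s a minimal sketch of what such a populateFromJsonPatch() extension method could look like, reusing the property-type handling from the switch logic at the top of this post. The JsonPatchElement shape and the case-insensitive property matching are my assumptions; see the github project for the real version.

using System;
using System.Collections.Generic;
using System.Linq;

public static class JsonPatchExtensions
{
    // apply "replace" operations from a JSON Patch DTO to any object's properties
    public static void populateFromJsonPatch(this object target, IEnumerable<JsonPatchElement> ops)
    {
        foreach (var op in ops.Where(o => o.op == "replace"))
        {
            // "/title" -> "Title": match the property name case-insensitively
            var propName = op.path.TrimStart('/');
            var prop = target.GetType().GetProperties()
                .FirstOrDefault(p => string.Equals(p.Name, propName, StringComparison.OrdinalIgnoreCase));
            if (prop == null)
                continue;

            if (prop.PropertyType == typeof(string))
            {
                prop.SetValue(target, op.value, null);
            }
            else if (prop.PropertyType == typeof(int) || prop.PropertyType == typeof(int?))
            {
                int valInt;
                if (op.value == null && prop.PropertyType == typeof(int?))
                    prop.SetValue(target, null, null);
                else if (Int32.TryParse(op.value, out valInt))
                    prop.SetValue(target, valInt, null);
            }
            else if (prop.PropertyType == typeof(DateTime))
            {
                DateTime valDt;
                if (DateTime.TryParse(op.value, out valDt))
                    prop.SetValue(target, valDt, null);
            }
        }
    }
}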

Using Handlebars.js templates as precompiled JS files

I’ve previously used Handlebars templates in projects, but only in the simple way: i defined <script> blocks as inline html templates, and used them from my js code.

However, i have a project where i need all the code, including html templates, as js files.
Luckily Handlebars can do this, but we’ll need to set up the proper node-based build environment to do so.

  • node.js
  • gulp task runner
  • bower for flat package management
  • handlebars for templates

The templates will get “precompiled” by gulp, resulting in a pure js file to include in the html page. Then we’ll be able to code in HTML, but deploy as JS.

First i create a new empty ASP.NET Web project in Visual Studio. I’ll call it: HandlebarsTest. Note that almost none of this is Visual Studio-specific, so 95% is applicable to any other development environment.

Next, i will set up Gulp and Bower, similar to how i did it in my two prior posts.

I will create the gulpfile.js like so (we’ll add to it later):

var gulp = require('gulp');

gulp.task('default', ['scripts'], function() {

});

gulp.task('scripts', function() {
});

Open the node command prompt, and change to the new directory:

cd HandlebarsTest\HandlebarsTest
npm init
npm install -g gulp
npm install gulp --save-dev
npm install gulp-uglify --save-dev
npm install gulp-concat --save-dev
npm install gulp-wrap --save-dev
npm install gulp-declare --save-dev

I will create the .bowerrc file like so:

{
    "directory": "js/lib"
}

OK, now for the Handlebars setup. One thing to understand is that we need Handlebars at build/compile time AND at runtime. That means:

  • the precompilation will be run by gulp during build time (install gulp-handlebars using npm), and
  • the web browser will execute the templates with the handlebars runtime library (install it to the project using Bower)

Install the gulp plugin (the global install is optional; the --save-dev install is what the gulpfile actually require()s):

npm install gulp-handlebars --global
npm install gulp-handlebars --save-dev

Bower (client-side) packages

I will use Bower to install the client side libs: handlebars, jquery, etc. First, create the bower.json file.

bower init

Next, start installing!

bower install jquery
bower install handlebars

Those files get installed to /js/lib/* , per my .bowerrc file. Now we can reference them in scripts, or use them for js bundles.

HTML, Javascript, and Handlebars templates together

My use-case is to:

  1. Have a static HTML page
  2. Include a script tag which loads a single JS file
  3. The single JS file will load/contain the libraries AND the main execution code
  4. the main execution code will fill a DIV element by rendering a Handlebars template with a data object.

The HTML page just includes a single JS file, which will be built:

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Handlebars Test</title>
    <script type="text/javascript">
    (function() {
        function async_load(){
            var cb = 'cb=' +(new Date).getTime();

            var rmist = document.createElement('script');
            rmist.type = 'text/javascript';
            rmist.async = true;
            rmist.src = '../js/dist/bundle.js?' + cb;
            var x = document.getElementsByTagName('script')[0];
            x.parentNode.insertBefore(rmist, x);
        }
        if (window.attachEvent)
            window.attachEvent('onload', async_load);
        else
            window.addEventListener('load', async_load, false);
    }());
    </script>
</head>
<body>

    <h1>Handlebars Test</h1>

    <p id="main-content">
        There will be a dynamic element added after this paragraph.
    </p>
    <p id="dynamic-content"></p>

</body>
</html>

Handlebars templates will be in /templates/*.hbs. Here’s an example, which i’m calling /templates/hellotemplate.hbs:

<div class="hello" style="border: 1px solid red;">
    <h1>{{title}}</h1>
    <div class="body">
        Hello, {{name}}! I'm a template. 
    </div>
</div>

Javascript will be in /js/app/app.js, and the other libraries are under /js/lib/.

Here, i’m taking direction from https://github.com/wycats/handlebars.js#precompiling-templates

gulp-handlebars handles the precompilation. We will run the ‘gulp’ build process to precompile hbs templates to js later.

The app.js code will need to render the precompiled template with the data object, and add to the DOM somehow (using jQuery in this case).

"use strict";
var data = { title: 'This Form', name: 'Joey' };
var html = MyApp.templates.hellotemplate(data);
// console.log(html);

$(document).ready(function () {
    $('#dynamic-content').html(html);
});

Precompiling the templates and using them

I will modify the gulpfile.js to add a task for template compilation. This is my final version:

var gulp = require('gulp');
var uglify = require('gulp-uglify');
var concat = require('gulp-concat');
var handlebars = require('gulp-handlebars');
var wrap = require('gulp-wrap');
var declare = require('gulp-declare');

gulp.task('default', ['templates', 'scripts'], function () {

});

gulp.task('templates', function () {
    // return the stream, so gulp knows when this task has finished
    return gulp.src('templates/*.hbs')
      .pipe(handlebars())
      .pipe(wrap('Handlebars.template(<%= contents %>)'))
      .pipe(declare({
          namespace: 'MyApp.templates',
          noRedeclare: true, // Avoid duplicate declarations
      }))
      .pipe(concat('templates.js'))
      .pipe(gulp.dest('js/dist'));
});

// 'scripts' depends on 'templates', so js/dist/templates.js is fresh before bundling
gulp.task('scripts', ['templates'], function () {
    return gulp.src(['js/lib/jquery/dist/jquery.js', 'js/lib/handlebars/handlebars.runtime.js', 'js/dist/templates.js', 'js/app/**/*.js'])
      .pipe(concat('bundle.js'))
      .pipe(uglify())
      .pipe(gulp.dest('js/dist/'));
});

The key section is the ‘templates’ task. Translating:

  • read all *.hbs templates
  • process them thru handlebars() precompilation
  • wrap each compiled template in Handlebars.template(...)
  • declare them under the namespace MyApp.templates
  • output to a single JS file js/dist/templates.js

The scripts task combines all the JS files to one bundle.js. However, i had some trouble debugging the code, so i first ran the JS without a bundle. I changed the html to use traditional javascript references instead of the bundle.js:

<head>
    <title>Handlebars Test</title>
    <script src="../js/lib/jquery/dist/jquery.min.js"></script>
    <script src="../js/lib/handlebars/handlebars.runtime.min.js"></script>
    <script src="../js/dist/templates.js"></script>
    <script src="../js/app/app.js"></script>
</head>

The load order is important – libraries, then templates, then the main app code. After fixing bugs, i get the desired HTML output in the page:

handlebars-test-html-nobundle

Note the multiple GET requests. But it is functionally working.

Run the bundled version

Now that the JS code runs the templates with jQuery OK, we can remove the multiple script references and switch to the single bundle.js script.

Don’t forget to execute the ‘gulp’ build again (on the command line or via Visual Studio). Looking at the gulp ‘scripts’ task, note the order of the bundling concatenation needs to be the same order as the <script> tag includes in the above example. Otherwise, things will get out of order.

// 'scripts' depends on 'templates', so js/dist/templates.js is fresh before bundling
gulp.task('scripts', ['templates'], function () {
    return gulp.src(['js/lib/jquery/dist/jquery.js', 'js/lib/handlebars/handlebars.runtime.js', 'js/dist/templates.js', 'js/app/**/*.js'])
      .pipe(concat('bundle.js'))
      .pipe(uglify())
      .pipe(gulp.dest('js/dist/'));
});

Run the ‘gulp’ task again to build, via the command line or via the VS Task Runner Explorer, or the post-build event (i haven’t yet learned to use ‘watch’ to auto-rebuild). Don’t forget to change the HTML back to load /bundle.js instead of the multiple JS files.

Running/reloading the page, we finally get the precompiled templates inserted into the html page, via the single bundle.js:

handlebars-test-html-bundle

This stuff gets a bit crazy! Why do this? I guess i want to compile, process, and optimize my javascript code.

The project code is here: https://github.com/nohea/Enehana.CodeSamples/tree/master/HandlebarsTest . I’ve excluded the /node_modules directory, since the file paths are too long for Windows ZIP. To reconstitute the directory, cd to the project folder, and run:

cd HandlebarsTest\HandlebarsTest
npm update

That will reconstitute the packages from the internet, since all the dependencies are listed in the package.json file. It also allows you to easily exclude the node_modules directory from version control, and allows other developers to ‘npm update’ to populate their own directories.

Using Bower for JS package management instead of NPM

This is a follow up to my post: Learning Gulp with Visual Studio – the JavaScript Task Runner

In my last post, i did the following:

  • Created a web project
  • Installed Node.js
  • Installed Gulp for running JS build tasks
  • Installed more JS libraries using NPM (node package manager)
  • Created simple HTML+JS app w/jquery
  • Created gulp tasks to minify and bundle JS into main.js
  • Ran a proof of concept
  • Added a gulp task to the post-build event so it is run automatically

In this post, i will make some changes to the above, in that i will replace the JS package manager NPM with Bower.

Why? NPM works, but has some disadvantages vs. Bower. NPM packages all keep copies of their own dependencies. This can result in multiple copies of libraries like jQuery, possibly at different versions. Bower uses a ‘flat’ model, so only one version is installed at a time.

Well, what about just using NuGet? That is possible too, but NuGet is not good at updating dependent versions after initial installation. If you install different NuGet packages with the same dependency, the “last one wins”.

To be completely frank, i think it’s kind of crazy to have 2 different JS package managers in the same project. But Bower does solve the problem of having a single ‘flat’ set of package dependencies. I found some hacks/workarounds to do it in NPM, but i just can’t see making the brain investment unless i do node all day every day.

Installing Bower

Open the Node Command Prompt.

We are going to install it using NPM, globally:

npm install -g bower

At that point, we can install libraries using bower, similar to how NPM works.

You also need to install msysgit, but we already installed it when installing Git Extensions.

Bower uses Git to install packages.

Configuring .bowerrc

Packages will install to the bower_components/ directory by default. If you want to change that, create a file in your project named .bowerrc and enter the JSON to specify:

{
    "directory": "js/lib"
}

The packages will get installed there instead.

I found trouble creating the file with a leading dot in Windows Explorer or Visual Studio. However, using Notepad, you can save a file with a leading dot.

Installing Gulp to the project, as a development tool

First we create a package.json file for our project.

npm init

We are going to use the Gulp task build tool, but that is not a client-side library. It is used as a development build-time tool, so we are installing it using npm.

npm install gulp --save-dev

We also need to install the gulp-* plugins we need at build time:

npm install gulp-uglify --save-dev
npm install gulp-concat --save-dev

Those commands create entries in your package.json file, under the key devDependencies. That means they can be automatically installed when the project is built on another machine.
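
After those installs, the relevant section of package.json will look something like this (a sketch; your exact version numbers will differ):

{
  "name": "GulpBowerWebTest",
  "version": "0.0.1",
  "devDependencies": {
    "gulp": "~3.8.8",
    "gulp-uglify": "~1.0.1",
    "gulp-concat": "~2.4.1"
  }
}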

Installing client-side packages

The whole point of Bower is to use it for client-side packages like jQuery, Angular, etc.

First, in the node command prompt, we will initialize it by creating a bower.json file. You can do it manually, or do the init command, and fill out the questions.

Here is what i got:

{
  "name": "GulpBowerWebTest",
  "version": "0.0.1",
  "authors": [
    "raulg <raulg@xxxxx.yyy>"
  ],
  "description": "Test using Bower",
  "license": "Proprietary",
  "private": true,
  "ignore": [
    "**/.*",
    "node_modules",
    "bower_components",
    "js/lib",
    "test",
    "tests"
  ]
}

Now i’m going to install some libs, like jquery (specific version 1.9.x).

bower-install-jquery-commandprompt-2014-10-30

Note in Solution Explorer (w/“Show all Files” on) they installed to /js/lib/, as specified in our .bowerrc.

bower-install-jquery-sol-expl-2014-10-30

Updating Gulpfile.js

I copied the static index.html and app.js files from my prior project. I’m using the same Gulpfile.js, with only some updates to the paths, since they are in different locations:

var gulp = require('gulp');
var uglify = require('gulp-uglify');
var concat = require('gulp-concat');

gulp.task('default', ['scripts'], function() {

});

gulp.task('scripts', function() {
    return gulp.src(['js/lib/jquery/jquery.js', 'scripts/**/*.js'])
      .pipe(concat('main.js'))
      .pipe(uglify())
      .pipe(gulp.dest('js/dist/'));
});

I’m trying out using Task Runner Explorer to run gulp. After tweaking and testing, i’m going to try checking the After Build event. This could be an alternative to setting it in the project’s post-build event. I’m not sure if it’s better yet, since i’ll need to have it work when building on the TeamCity CI server using MSBuild. Some VS tooling is not supported in MSBuild.

bower-taskrunner-afterbuildevent-2014-10-30

Now when i run the app, it works the same. The differences are that jQuery is at version 1.9, and it should be easier to manage the client-side JS libs we use, and keep versions up to date and consistent (using the bower update command). The version range specified in bower.json controls how far bower update will go: a 1.9.x range gets the latest 1.9 release, but will not jump to 2.x.
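
For example, a bower.json dependencies entry like this (a hypothetical range) stays on the 1.9 line:

{
  "dependencies": {
    "jquery": "~1.9.1"
  }
}

The “~1.9.1” range lets bower update move to any newer 1.9.x release, but never to 1.10 or 2.x.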


Learning Gulp with Visual Studio – the JavaScript Task Runner

As I start looking into building more high-performance web apps, we are led into the area of Javascript and CSS bundling and minification. I know my “old-school” Javascript coding, but in recent years, there’s been a huge movement in the JS community regarding the whole toolchain, so i’m jumping in here.

There is a Microsoft ASP.NET way to do bundling now, as well as the ServiceStack Bundler project, which uses node.js. However, that also has some dependency on ASP.NET MVC code.

Since most of the development in this area has been built in the JavaScript / HTML / CSS community, the most mature tools are there. So i’m going to do a documented test of the tools in use. In recent years, i’ve done most web development in Visual Studio w/C#, Javascript, HTML, CSS. But i do have a background in professional Perl web development (years ago), so i have a different perspective. I’m coming at the new front-end JS toolsets from a point of discovery, so this may be most useful if you are also new to it. Don’t treat this as a “how to do it the best way” article.

Grunt is a “JavaScript Task Runner”, which can be thought of as a build tool for JS code. It uses node.js for executing tasks.

Gulp is another JavaScript Task Runner. It is in the same role as Grunt, but works with a JS code function instead of a JSON config. Also, it uses Node Streams instead of temp files, and does not require a ‘plugin’ per library. I was going to write a Grunt how-to, but i changed my mind and will do Gulp.

We want some kind of task runner, since we need to:

  • read all the JS library files and ‘minify’ them to take up the least space possible
  • bundle the files into one JS file, to reduce the number of HTTP requests the client needs to make

Thus, static files will not work. The task runner needs to run during design time, and probably build/deploy time as well.

Installing the Toolset

First to install the toolset on Windows, the FAQ has recommended:

  • Installing msysgit (which will be installed if you have my favorite Git Extensions installed)
  • Installing node for windows
  • using Command Prompt or Powershell as a command line (i use command prompt here)

Then we can figure out later how to make it easier to use in Visual Studio and MSBuild.

OK, i installed Node and npm, the Node.js package manager. Think of node as being its own platform with its own infrastructure, its own EXE, and its own set of installable packages. NPM is how you install packages.

Installing Grunt (via NPM)

According to the getting started page, we install grunt-cli via NPM.

Run the “Node.js command prompt” as Administrator by right-clicking it in the start menu. Note: this is NOT the green “Node.js” shortcut, which will not work. Then in the prompt, type:

npm install -g grunt-cli

You will see it download and install. But never mind that/skip it, because i just changed my mind (Javascript fashion changes rapidly – just hang on for the ride, and make sure you know what problem a tool solves before you try to use it). I like the gulpfile code syntax better than the grunt json format, and i heard it builds faster too.

Installing Gulp (via NPM)

Now that i changed my mind, here’s how we can install Gulp via NPM: (from the Getting Started)

npm install --global gulp

learninggulp-npm-gulp

Seems to have installed some dependent libs i know nothing about. No prob.

This is a global install on your machine. It seems you will also have to install it per project using npm “devDependencies” with the --save-dev flag. More on that later. Global installs are for command-line utilities. If you are using client-side libs, you install them or require() them in your project.

Creating a New Web Project using Gulp

You can do this with no IDE by creating an empty directory and start there. But since my team uses Visual Studio, i will create an empty ASP.NET web app and install there manually.

In VS 2013, Add New Project, ASP.NET, name it, and select the “Empty” template. That will create the minimal project.

For kicks, add a static HTML file for testing later. Right-click the project, Add -> HTML page. Call it index.html.

Installing Gulp to the project

In the Node command prompt (doesn’t have to be Administrator mode), cd to your project directory (not the solution directory).

npm install gulp --save-dev

learninggulp-npm-gulp-save-dev

This will install the Node infrastructure to the project as a /node_modules/ directory.

I recommend clicking the “View All Files” button in Solution Explorer and also clicking the “Refresh” button.

learninggulp-solexp-showfiles

This will show you the files not tracked by VS, but in the directory.

Create the minimal gulpfile.js

Right-click the project, and add new text/js file called gulpfile.js . The minimal file will contain:

var gulp = require('gulp');

gulp.task('default', function() {
  // place code for your default task here
});

Now on the command line, you can run the default ‘gulp’ command, which will do nothing.

learninggulp-run-default-task

Installing other JS libraries for use (via NPM)

The point of using a JS build tool is including other JS libraries in your project, for use at build time or run time. So for the sake of a proof-of-concept, i will install the uglify lib (for minification), concat (to bundle all js scripts), and the jQuery lib (for use in our client-side scripts).

There is a special gulp-uglify plugin (a bunch of others too), so we install that in the same way with npm.

npm install gulp-uglify --save-dev

concat also has a gulp plugin:

npm install gulp-concat --save-dev

I will install the standard jQuery lib as well. Note i can use NPM to install it, or i could do it Microsoft-style and install using NuGet. The only difference would be the path to the *.js files in the project.

npm install jquery --save-dev

jQuery installs to: /node_modules/jquery/dist/jquery.js

Create a primitive “real” app

I’m going to create a /scripts/app.js file which does a simple jQuery DOM manipulation.

// my app
$(document).ready(function () {
    $('#message-area').html("jQuery changed this.");
});

Also, the index.html file will reference/run the app.js and jquery scripts.

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Gulp Web Test</title>
    <script src="node_modules/jquery/dist/jquery.min.js"></script>
    <script src="scripts/app.js"></script>
</head>
<body>
    <h1>Gulp Web Test</h1>
    <div id="message-area">Static DIV content.</div>
</body>
</html>

When you execute this traditional, static version of the app, it will run as expected:

learninggulp-jquerychangedthis-static

Starting to put it together

Now we configure the gulp file to run our two build steps together:

  1. concat our JS scripts into one JS file
  2. minify (uglify) the result

We need to add this to the gulpfile.js:

var gulp = require('gulp');
var uglify = require('gulp-uglify');
var concat = require('gulp-concat');

gulp.task('default', ['scripts'], function () {

});

gulp.task('scripts', function () {
    return gulp.src(['node_modules/jquery/dist/jquery.js', 'scripts/**/*.js'])
      .pipe(concat('main.js'))
      .pipe(gulp.dest('dist/js'))   // writes the unminified main.js
      .pipe(uglify())
      .pipe(gulp.dest('dist/js'));  // overwrites it with the minified version
});

 

Note i’ve created a new ‘scripts’ task, which is added as a dependent task to the ‘default’ task. The ‘scripts’ task is using the jquery.js file and all the *.js files in the /scripts/ directory as sources. They all go thru concat(), output to main.js in the dist/js/ directory. They then go thru uglify().

Next, we run the ‘gulp’ command on the node command line. After a couple back-and-forth errors and corrections, we get this:

learninggulp-gulp-cmd-final

In Solution Explorer, you can refresh and now see the /dist/js/main.js file which was created.

learninggulp-output-main-js

It should contain our custom js code as well as the whole of jQuery.

Then we can update the HTML reference to the new output main.js file, and see if it runs the same way. Delete the script tags for jquery.js and app.js, and add a single one for main.js:

<script src="dist/js/main.js"></script>

When you run the same index.html in the browser, you should get the same “jQuery changed this.” output, even though the only js file is ‘main.js’. The output main.js is only 83K. I’m sure it could get smaller if we use gzip, etc. But it proves the concept works. It should be very easy to add other JS modules as needed.

The downside is installing this stuff to the project added 2,000 files under /node_modules/, adding about 12MB.

Visual Studio and MSBuild Integration

I did find some info on how to run Gulp and Grunt from within VS as a post-build command, and hopefully in MSBuild as well:

For Gulp, we can just add a post-build step in the project –

  • right-click the project -> Properties…
  • click “Build Events”…
  • to the “Post-build event command-line:” add the following:
cd $(ProjectDir)
gulp

That will run the ‘gulp’ command via VS when you ‘build’, instead of having to use the command line. Much more convenient. You can delete the main.js file, then ‘build’ again – it will regenerate. Reference: Running Grunt from Visual Studio post build event command line
http://stackoverflow.com/questions/17256934/running-grunt-from-visual-studio-post-build-event-command-line .

Possibly much more full-featured and useful is the “Task Runner Explorer” VSIX extension. This is basically “real” tooling support in VS. I haven’t tried it yet, but i expect to try it.

Code for this post can be found here: https://github.com/nohea/Enehana.CodeSamples/tree/master/GulpWebTest

Update: I installed the Task Runner Explorer per the article above. It does work to view/run targets in the Gulpfile.js, so you don’t have to run on the command line, or have to build to execute the tasks.

learninggulp-task-runner-explorer

Update 2: i have a follow-up post: Using Bower for JS package management instead of NPM


ServiceStack CSV serializer with custom filenames

One of ServiceStack’s benefits is having one service method endpoint output to all supported serializers. The exact same code will output formats for JSON, XML, CSV, and even HTML. If you are motivated, you are also free to add your own.

Now in the case of CSV output, the web browser handling the download will prompt the user to save the text/csv stream as a file. The ‘File Save’ dialog will fill in the name of the file, if it is included in the HTTP response this way:

todos-service-opening-dialog

Note the filename is “Todos.csv”, because the request operation name is “Todos”. (i’m using the example service code).

There could be many cases where you would like to have much more fine-grained control of the default filename. However, you don’t want to pollute the Response DTO, since that would ruin the generic “any format” nature of the framework. You’ll probably also want to be able to have different filename-creation logic per-service, since you’ll often have many services in one application.

In my attempt to get to the bottom of this,

  • I create a new blank ASP.NET project. The version i want is the 3.9.* version, since i’m not up on the v4 stuff.
  • Using this site, i can identify the correct version of the NuGet package, and install the correct ones: https://www.nuget.org/packages/ServiceStack.Host.AspNet/3.9.71
  • Then i install from the console.
    PM> Install-Package ServiceStack.Host.AspNet -Version 3.9.71
  • I see all my references are 3.9.71
  • My web.config has the ServiceStack handlers installed, and my project has the App_Start\AppHost.cs

The demo project is the ToDo list. I’ll use it to test the CSV output. First, add a few items:

todos-ux

Then try to get the service ‘raw’:
http://localhost:49171/todos

You will see the generic ServiceStack output page:

todos-service-html

Next, click the ‘csv’ link on the top right, in order to get the service with a ‘text/csv’ format. You will get the prompt dialog, as shown at the top of this post, with the ‘Todos.csv’ filename.

If you inspect the HTTP traffic in Fiddler, the request is:

GET /todos?format=csv HTTP/1.1

and the response looks like this:

HTTP/1.1 200 OK
Cache-Control: private
Content-Type: text/csv
Vary: Accept-Encoding
Server: Microsoft-IIS/8.0
Content-Disposition: attachment;filename=Todos.csv
X-Powered-By: ServiceStack/3.971 Win32NT/.NET
X-AspNet-Version: 4.0.30319
X-SourceFiles: =?UTF-8?B?QzpcVXNlcnNccmF1bGdcRG9jdW1lbnRzXGVuZWhhbmFcY29kZVxFbmVoYW5hLkNvZGVTYW1wbGVzXFNzQ3N2RmlsZW5hbWVcdG9kb3M=?=
X-Powered-By: ASP.NET
Date: Tue, 11 Mar 2014 01:43:52 GMT
Content-Length: 88

Id,Content,Order,Done
1,Get bread,1,False
2,make lunch,2,False
3,do launtry,3,False

The Content-Disposition: header defines the default filename of the save dialog box.

So how is this set? ServiceStack’s CSV serializer code sets it explicitly.

The best way i’ve discovered to do this is to plug in your own alternative CsvFormat plugin. If you view the source code, you’ll see where it sets the Content-Disposition: header in the HTTP Response.

    //Add a response filter to add a 'Content-Disposition' header so browsers treat it natively as a .csv file
    appHost.ResponseFilters.Add((req, res, dto) =>
    {
	    if (req.ResponseContentType == ContentType.Csv)
	    {
		    res.AddHeader(HttpHeaders.ContentDisposition,
			    string.Format("attachment;filename={0}.csv", req.OperationName));
	    }
    });

The docs for ServiceStack’s CSV Format are clear on it:

https://github.com/ServiceStackV3/ServiceStackV3/wiki/ServiceStack-CSV-Format

A ContentTypeFilter is registered for ‘text/csv’, and it is implemented by ServiceStack.Text.CsvSerializer.
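
For reference, that serializer registration inside CsvFormat.Register() looks roughly like this (paraphrased from the v3 source, so treat the exact delegate signatures as an assumption):

// in CsvFormat.Register(IAppHost appHost):
appHost.ContentTypeFilters.Register(ContentType.Csv,
    SerializeToStream,                      // writes the response DTO using CsvSerializer
    CsvSerializer.DeserializeFromStream);   // reads text/csv request bodies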

Additionally, a ResponseFilter is added, which adds a Response header. Note the Content-Disposition: header is explicitly using the Request ‘OperationName’ as the filename. Normally this will be the Request DTO, which in this case is named ‘Todos’.

res.AddHeader(HttpHeaders.ContentDisposition, 
        string.Format("attachment;filename={0}.csv", req.OperationName));

So, what if we want to replace the default registration with different logic for setting the filename? We won’t need to change the registered serializer (still want the default CSV), but we should remove the ResponseFilter and add it in a slightly different way.

If you want to remove both, you can remove the Feature.Csv. However, in this case i just want to change the filter. I had trouble altering the response filter directly, so instead i created my own ‘CsvFilenameFormat’, which looks almost exactly like ‘CsvFormat’. The difference is that i try to get a custom filename from the service code, by looking in Request.Items Dictionary<string, object>.

The differing code in CsvFilenameFormat.Register():

    //Add a response filter to add a 'Content-Disposition' header so browsers treat it natively as a .csv file
    appHost.ResponseFilters.Add((req, res, dto) =>
    {
        if (req.ResponseContentType == ContentType.Csv)
        {
            string csvFilename = req.OperationName;

            // look for custom csv-filename set from Service code
            if( req.GetItemStringValue("csv-filename") != default(string) )
            {
                csvFilename = req.GetItemStringValue("csv-filename");
            }

            res.AddHeader(HttpHeaders.ContentDisposition, string.Format("attachment;filename={0}.csv", csvFilename));
        }
    });

So if the service code sets a custom value, it will be used by the text/csv response for the filename. Otherwise, use the default.

In the service:

        public object Get(Todos request)
        {
            // set custom filename logic here, to be read later in the response filter on text/csv response
            this.Request.SetItem("csv-filename", "customfilename");

            // ... return the todo list as usual
        }

So the mechanism is set up; all we need to do is properly prevent the default Csv ResponseFilter and use our own instead.

In AppHost Configure(), add a line to remove the Csv plugin, and one to install our replacement:

            // clear 
            this.Plugins.RemoveAll(x => x is CsvFormat);

            // install custom CSV
            Plugins.Add(new CsvFilenameFormat());

At this point, everything is in place, and we can re-run our web app:

todos-service-opening-dialog-customfilename

Project code here.

That’s the show. Thanks.

Customizing IAuthProvider for ServiceStack.net – Step by Step

Introduction

Recently, i started developing my first ServiceStack.net web service. As part of it, i found a need to add authentication to the service. Since my web service is connecting to a legacy application with its own custom user accounts, authentication, and authorization (roles), i decided to use the ServiceStack Auth model, and implement a custom IAuthProvider.

Oh yeah, the target audience for this post:

  • C# / .NET / Mono web developer who is getting started learning how to build a RESTful web api using ServiceStack.net framework
  • Wants to add the web API to an existing application with its own proprietary authentication/authorization logic

I tried to dive in and implement it in my app, but i got something wrong with the routing to the /auth/{provider}, so i decided to take a step back and do the simplest thing possible, just so i understood the whole process. That’s what i’m going to do today.

I’m using Visual Studio 2012 Professional, but you could also use VS 2010, probably VS 2012 Express as well (or MonoDevelop, that’s another story i haven’t tried).

The simplest thing possible, in my mind, is a HelloWorld service with authentication added on top.

This is not an example of TDD-style development — more of a technology exploration.

OK, let’s get started.

Creating HelloWorld

I’m not going to repeat what’s already in the standard ServiceStack.net docs, but the summary is:

  • create an “ASP.NET Empty Web Application” (calling mine SSHelloWorldAuth)
  • pull in ServiceStack assemblies via NuGet (not my usual practice, but it’s easy). In fact, i’m using the “Starter ASP.NET Website Template – ServiceStack”. That will install all the assemblies and create references, and also update Global.asax
  • Create the Hello, HelloRequest, HelloResponse, and HelloService classes, just like the sample. Scratch that – it is already defined in the template at App_Start/WebServiceExamples.cs
  • Run the app locally. You will see the “ToDo” app loaded and working in the default.htm. Also, you can test the Hello function at http://localhost:65227/hello (your port number may vary)

 

Adding a built-in authentication provider

OK that was the easy part. Now we’re going to add the [Authenticate] attribute to the HelloService class.

[Authenticate]
public class HelloService : Service
{  ...

This will prevent the service from executing unless the session is authenticated already. In this case, it will fail, since nothing is set up.

Enabling Authentication

Now looking in App_Start/AppHost.cs, i found an interesting section:

		/* Uncomment to enable ServiceStack Authentication and CustomUserSession
		private void ConfigureAuth(Funq.Container container)
		{
			var appSettings = new AppSettings();

			//Default route: /auth/{provider}
			Plugins.Add(new AuthFeature(() => new CustomUserSession(),
				new IAuthProvider[] {
					new CredentialsAuthProvider(appSettings), 
					new FacebookAuthProvider(appSettings), 
					new TwitterAuthProvider(appSettings), 
					new BasicAuthProvider(appSettings), 
				})); 

			//Default route: /register
			Plugins.Add(new RegistrationFeature()); 

			//Requires ConnectionString configured in Web.Config
			var connectionString = ConfigurationManager.ConnectionStrings["AppDb"].ConnectionString;
			container.Register<IDbConnectionFactory>(c =>
				new OrmLiteConnectionFactory(connectionString, SqlServerDialect.Provider));

			container.Register<IUserAuthRepository>(c =>
				new OrmLiteAuthRepository(c.Resolve<IDbConnectionFactory>()));

			var authRepo = (OrmLiteAuthRepository)container.Resolve<IUserAuthRepository>();
			authRepo.CreateMissingTables();
		}
		*/

Let’s use it. But i want to just enable CredentialsAuthProvider, since that is a forms-based username/password authentication (the closest to what i want to customize).

A few notes on the code block above:

The “Plugins.Add(new AuthFeature(() ” stuff was documented.

“Plugins.Add(new RegistrationFeature());” was new to me, but now i see it is to add the /register route and behavior.

For this test, i will go along with using OrmLite for the authentication tables. In order to do that,

  • i’m using a new connection string “SSHelloWorldAuth”,
  • adding it to Web.config: <connectionStrings><add name="SSHelloWorldAuth" connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=SSHelloWorldAuth;Integrated Security=SSPI;" providerName="System.Data.SqlClient" /></connectionStrings>
  • creating a new SQLEXPRESS database locally, called: SSHelloWorldAuth

Finally, we’ll have to add/enable the call to ConfigureAuth(container), which will initialize the authentication system.

Now we’ll try running the app: F5 and go to http://localhost:65227/hello in the browser again. I get a new problem:

In a way, it’s good, because the [Authenticate] attribute on the HelloService class worked – the resource was found, but sent a redirect to /login . However, no handler is set up for /login.

Separately, i checked if the OrmLite db got initialized with authRepo.CreateMissingTables(); , and it seems it did (2 tables created).

Understanding /login , /auth/{provider}

This is where i got hung up on my initial try to get it working, so i’m especially determined to get this working.

The only example of a /login implementation i found was in the ServiceStack source code tests. It seems like /login would be for a user to enter credentials in a form. It seems if you are a script (javascript or web api client), you would authenticate at the /auth/{provider} URI.

That’s when i thought – is the /auth/* service set up properly? Let’s try going to http://localhost:65227/auth/credentials

So the good news is that it is set up. Why don’t we try to authenticate against /auth/credentials?

Well, first i should create a valid username/password combination. I can’t just insert into the db, since the password must be one-way hashed. So i’m going to use the provider itself to do that.

I copied a CreateUser() function from the ServiceStack unit tests, and will run it in my app’s startup. I modified it slightly to pass in the OrmLiteAuthRepository, and call it right after initializing the authRepo.

CreateUser(authRepo, 1, "testuser", null, "Test2#4%");
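
For reference, here’s a minimal sketch of what that helper might look like (adapted from the ServiceStack v3 tests; treat the exact property values as assumptions). The important part is that CreateUserAuth() salts and hashes the password before storing it:

// using ServiceStack.ServiceInterface.Auth;  (v3 namespace)
private UserAuth CreateUser(IUserAuthRepository authRepo, int id,
    string username, string email, string password)
{
    return authRepo.CreateUserAuth(new UserAuth
    {
        Id = id,
        UserName = username,
        Email = email ?? username + "@example.com",
        DisplayName = username,
    }, password);  // the repository hashes the password; it is never stored in clear text
}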

Run the app with F5 again, and then check the database: select * from userauth — we now have one row with username and hashed password. Suitable for testing. (don’t forget to disable CreateUser() now).

Authenticating with GET

I would never do this on my “real” application. At minimum, i would only expose a POST method. But instead of writing some javascript, i’m going to try the web browser to submit credentials and try to authenticate.

First, i’m going to try and use a wrong password:

http://localhost:65227/auth/credentials?UserName=testuser&Password=wrong

… i get the same “Invalid UserName or Password” error, which is good.

Now i’ll try the correct username/password (url-encoding left as an exercise for the reader):

http://localhost:65227/auth/credentials?UserName=testuser&Password=Test2%234%25

Success! This means my user id has a validated ServiceStack session on the server, and is associated with my web browser’s ss-id cookie.

I can now go to the /hello service on the same browser session, and it should work:

Awesome. So we’ve figured out authenticating via /auth/credentials before calling the /hello service. Just for kicks, i stopped running the app in Visual Studio and terminated my local IIS Express web server instance, in order to try a new session. When i ran the project again and went to /hello, it failed as expected (which we want). Only by authenticating first do we access the resource.

IAuthProvider vs IUserAuthRepository

Note that i started this saying i wanted to implement my own IAuthProvider. However, ServiceStack also separately abstracts the IUserAuthRepository, which seems to be independently pluggable. Think of it this way:

  • IAuthProvider is the authentication service code backing the HTTP REST API for authentication
  • IUserAuthRepository is the provider’s .NET interface for accessing the underlying user/role data store (all operations)

Since my initial goal was to use username/password login with my own custom/legacy authentication rules, it seems more appropriate to subclass CredentialsAuthProvider (creating my own AcmeCredentialsAuthProvider).

I do not expect to have to create my own IUserAuthRepository at this time– but it would be useful if i had to expose my custom datastore to be used by any IAuthProvider. If you are only supporting one provider, you can put the custom code into the provider’s TryAuthenticate() and OnAuthenticated() methods. With a legacy system, you probably already have tools to manage user accounts and roles, so you’re not likely to need to re-implement all the IUserAuthRepository methods. However, if you need to implement Roles, a custom implementation of IUserAuthRepository may be in order (to be revisited).

This is going to be almost directly from the Authentication and Authorization wiki docs.

  • Create a new class, AcmeCredentialsAuthProvider.cs
  • subclass CredentialsAuthProvider
  • override TryAuthenticate(), adding in your own custom code to authenticate username/password
  • override OnAuthenticated(), adding any additional data for the user to the session for use by the application

    public class AcmeCredentialsAuthProvider : CredentialsAuthProvider
    {
        public override bool TryAuthenticate(IServiceBase authService, string userName, string password)
        {
            //Add here your custom auth logic (database calls etc)
            //Return true if credentials are valid, otherwise false
            if (userName == "testuser" && password == "Test2#4%")
            {
                return true;
            }
            else
            {
                return false;
            }
        }

        public override void OnAuthenticated(IServiceBase authService, IAuthSession session, IOAuthTokens tokens, Dictionary<string, string> authInfo)
        {
            //Fill the IAuthSession with data which you want to retrieve in the app eg:
            session.FirstName = "some_firstname_from_db";
            //...

            //Important: You need to save the session!
            authService.SaveSession(session, SessionExpiry);
        }
    }

As you can see, i did it in a trivially stupid way, but any custom logic of your own will do.

Finally, we change AppHost.cs ConfigureAuth() to load our provider instead of the default.

			Plugins.Add(new AuthFeature(() => new CustomUserSession(),
				new IAuthProvider[] {
					new AcmeCredentialsAuthProvider(appSettings), 
				}));

Run the app again, you should get the same results as before passing the correct or invalid username/password. Except in this case, you can set a breakpoint and verify your AcmeCredentialsAuthProvider code is running.

So at the end of this i’m happy:

  • I established how to create a ServiceStack service with a working custom username/password authentication
  • I learned some things from the ServiceStack Nuget template which was in addition to the docs
  • I understand better where it is sufficient to only override CredentialsAuthProvider for IAuthProvider, and where it may be necessary to implement a custom IUserAuthRepository (probably to implement custom Roles and/or Permissions)

Thanks for your interest. If you are interested in the code/project file created with this post, i’ve pushed it to GitHub.