Building a basic REST AspNet Core Example: Hypermedia - Part 8

If you need to catch up on the previous posts then see parts 1 & 2, 3, 4, 5, 6, and 7. The source code for this post is on GitHub.

First a disclaimer - this blog is a journey for me. I am no expert in hypermedia - rather a newbie trying it out and documenting it here (hence the site name Code Playground). So whether or not I am all in on hypermedia is to be determined.

Hypermedia

The third level of Richardson’s maturity model is about hypermedia. Oversimplified, it is about also providing links in the response to the actions you can take from the given state. HTML has hypermedia controls for this, like the form, a, and img tags. No one really needs a manual or documentation to use a web site, and hypermedia can be seen as a way to make a protocol more self-documenting and less brittle to version changes (if clients have to hardcode all URIs to the API, you can be locked in when attempting to change them).

Notice that Roy Fielding, the man who coined REST, is pretty strict on hypermedia, saying you cannot call it REST without it. Read this post for more on that statement. So no doubt the REST term is misused by many APIs in the wild.

With JSON it is less clear how hypermedia controls should be provided. There is a range of options, such as Json-LD, HAL, Collection+JSON, and Siren.

The “RESTful Web APIs: Services for a Changing World” book looks at many of these and may be worth a read. You can also read this post by Kevin Sookocheff that covers these formats. There is more material on this subject than I can cover - and I do not claim to be that familiar with each of the standards. In this post I will look at the first two.

Json-LD - A JSON-based Serialization for Linked Data

This is a W3C recommendation. It builds on the concept of Linked Data, which says: 1) use URIs as names for things, 2) use HTTP URIs so that people can look up the names, 3) when someone looks up a URI, provide useful information, and 4) include links to other URIs, so that people can discover more things.

You may read the Json-LD introduction to understand it further, but basically it provides, e.g., the ability to use links in your JSON and to let the JSON document describe itself and its terms. It can be used on an existing API if you wish. There is a nice video here explaining the basics, and another one that continues on from the first. Expansion and compaction are explained here and are useful to watch to get the data exchange idea behind Json-LD.
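To get a feel for the idea, here is a rough sketch (hypothetical, but using the invoiceDate term introduced below). A compacted document uses short terms defined in its context:

```json
{
  "@context": { "invoiceDate": "http://restexample.org/types/invoice#invoiceDate" },
  "invoiceDate": "2016-12-19T00:00:00+01:00"
}
```

Its expanded form drops the context and spells out the full IRIs:

```json
[
  {
    "http://restexample.org/types/invoice#invoiceDate": [
      { "@value": "2016-12-19T00:00:00+01:00" }
    ]
  }
]
```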

Let’s look at how Json-LD can be used with our invoice representation. But first, let’s look at the response for GET /invoices/1:

```json
{
  "invoiceDate": "2016-12-19T00:00:00+01:00",
  "dueDate": "2016-12-19T00:00:00+01:00",
  "customer": {
    "name": "Customer 1",
    "addressLines": []
  },
  "subTotal": 20.0,
  "lines": [
    {
      "lineNumber": 1,
      "description": "Line 1",
      "quantity": 2.0,
      "itemPrice": 10.0,
      "total": 20.0
    }
  ],
  "id": "1"
}
```

Now, when using the property name invoiceDate, what exactly do we mean? Json-LD aims high and goes for a common understanding between different web sites, so it tries to resolve the ambiguity of what that term means. To be specific, the property could be specified as a URI (or a URL, if it can be dereferenced):

```json
{
  "http://restexample.org/types/invoice#invoiceDate": "2016-12-19T00:00:00+01:00",
  ...
}
```

In this case it states that we mean an invoice date as defined by restexample, and at the end of that URL we provide documentation for the property. We could also have looked in schema.org and picked a URL from there - thus aiming for a more cross-site understanding of the data - but even within our own company this may be useful. To fit that understanding into your existing JSON structure, you add a @context to the JSON. The context tells us how to understand the JSON data. It allows us to map between the property names (called terms) and the unique names (IRIs), as shown here.

```json
{
  "@context": {
    "invoiceDate": "http://restexample.org/types/invoice#invoiceDate",
    ...
  },
  "invoiceDate": "2016-12-19T00:00:00+01:00",
  ...
}
```

The context does not have to go within your data as shown here. It can be pointed to using the HTTP Link header.
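For example, a sketch of the Link header form described in the Json-LD spec (the context URL is our hypothetical one):

```
Link: <http://restexample.org/invoice.jsonld>; rel="http://www.w3.org/ns/json-ld#context"; type="application/ld+json"
```

You can also reference the context from within the document like this: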

```json
{
  "@context": "http://restexample.org/invoice.jsonld",
  "invoiceDate": "2016-12-19T00:00:00+01:00",
  ...
}
```

Json-LD also introduces the concept of global identifiers to identify the objects, that is, to add links.

```json
{
  "@context": "http://restexample.org/invoice.jsonld",
  "invoiceDate": "2016-12-19T00:00:00+01:00",
  "@id": "http://restexample.org/invoice/1",
  ...
}
```

So, e.g., in the case of getting a list of invoices this can be useful, with a link for each invoice. You can also specify the type of a property. Let’s say our invoice has a link to sales support:

```json
{
  "@context": {
    "invoiceDate": "http://restexample.org/types/invoice#invoiceDate",
    "salesSupport": {
      "@id": "http://restexample.org/types/invoice#salesSupport",
      "@type": "@id"
    },
    ...
  },
  "invoiceDate": "2016-12-19T00:00:00+01:00",
  "salesSupport": "http://restexample.org/salesSupport",
  ...
}
```

With the above context we specify that the salesSupport property is to be treated as a link. More generally, @type can be used to state a property’s type, like number, datetime, etc., as shown here:

```json
{
  "@context": {
    "invoiceDate": {
      "@id": "http://restexample.org/types/invoice#invoiceDate",
      "@type": "https://schema.org/DateTime"
    },
    "dueDate": {
      "@id": "http://restexample.org/types/invoice#dueDate",
      "@type": "https://schema.org/DateTime"
    },
    "subTotal": {
      "@id": "http://restexample.org/types/invoice#subTotal",
      "@type": "https://schema.org/Number"
    },
    ...
  },
  ...
}
```

There’s much more to Json-LD than shown here, but these are the basics. One thing you cannot do is specify the actions you can take from a given response. For this you can combine Json-LD with Hydra. There’s a video on Hydra here. Hydra calls these actions “operations”. So you could specify that a delete operation exists like this:

```json
{
  "@context": "http://www.w3.org/ns/hydra/context.jsonld",
  "@id": "http://restexample.org/invoice/1",
  "operation": [
    {
      "@type": "DeleteResourceOperation",
      "method": "DELETE"
    }
  ],
  ...
}
```

That’s just a simple example - I won’t cover Hydra here in detail.

AspNet Core

For it to be reasonable to add Json-LD to an API, some tooling is required - otherwise these definitions will just end up out of sync with the implementation. For client-side Json-LD processing there is json-ld.net, supporting the Json-LD 1.0 Processing Algorithms and API. There is also this one, which is not .NET Core. So based on a quick Google tour there is nothing that easily fits the purpose. Let me know if you have the solution.
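As a rough sketch of what client-side processing with json-ld.net could look like (assuming its JsonLdProcessor API; check the library’s documentation for the exact signatures):

```csharp
using JsonLD.Core;
using Newtonsoft.Json.Linq;

var doc = JObject.Parse(@"{
    ""@context"": { ""invoiceDate"": ""http://restexample.org/types/invoice#invoiceDate"" },
    ""invoiceDate"": ""2016-12-19T00:00:00+01:00""
}");

// Expansion removes the context and replaces terms with full IRIs.
JArray expanded = JsonLdProcessor.Expand(doc);
```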

HAL - JSON Hypertext Application Language

Taken from HAL’s specification:

HAL is a generic media type with which Web APIs can be developed and exposed as series of links. Clients of these APIs can select links by their link relation type and traverse them in order to progress through the application.

Basically it adds two reserved JSON properties, _links and _embedded. Its media type is “application/hal+json”. _links is defined as:

It is an object whose property names are link relation types (as defined by [RFC5988]) and values are either a Link Object or an array of Link Objects. The subject resource of these links is the Resource Object of which the containing “_links” object is a property.

To get the understanding of link relation types you can read RFC 5988 section 4 that says,

In the simplest case, a link relation type identifies the semantics of a link. For example, a link with the relation type “copyright” indicates that the resource identified by the target IRI is a statement of the copyright terms applying to the current context IRI. Link relation types can also be used to indicate that the target resource has particular attributes, or exhibits particular behaviours; for example, a “service” link implies that the identified resource is part of a defined protocol (in this case, a service description).

So let’s see the link relation type as a property name in HAL:

```json
{
  "_links": {
    "self": {
      "href": "/invoices"
    }
  },
  ...
}
```

“self” is the link relation property. “self” signifies that the URL in the value of the href attribute identifies a resource equivalent to the containing element. IANA has a link relation registry here, so take a look at it before inventing your own.

_embedded is defined as

It is an object whose property names are link relation types (as defined by [RFC5988]) and values are either a Resource Object or an array of Resource Objects. Embedded Resources MAY be a full, partial, or inconsistent version of the representation served from the target URI.

The Link Object can have the following properties: href, templated, type, deprecation, name, profile, title, and hreflang. But let’s see a constructed example for GET /invoices/:

```json
{
  "_links": {
    "self": {
      "href": "/invoices"
    },
    "next": {
      "href": "/invoices?page=2"
    }
  },
  "_embedded": {
    "invoices": [
      {
        "_links": {
          "self": {
            "href": "/invoices/2"
          }
        },
        "invoiceDate": "2016-12-19T00:00:00+01:00",
        "dueDate": "2016-12-19T00:00:00+01:00",
        "customer": {
          "name": "Customer 1"
        },
        "subTotal": 20.0,
        "id": "2"
      },
      {
        "_links": {
          "self": {
            "href": "/invoices/1"
          }
        },
        "invoiceDate": "2016-12-18T00:00:00+01:00",
        "dueDate": "2016-12-18T00:00:00+01:00",
        "customer": {
          "name": "Customer 2"
        },
        "subTotal": 130.0,
        "id": "1"
      }
    ]
  }
}
```

Here we have a “next” link relation that takes us to the next subset of invoices. _embedded contains a partial representation of each invoice, with a link to it.

In the above example we are using link relation types registered with IANA. When you need to create your own, RFC 5988, defining “Extension Relation Types”, says:

Applications that don’t wish to register a relation type can use an extension relation type, which is a URI [RFC3986] that uniquely identifies the relation type. Although the URI can point to a resource that contains a definition of the semantics of the relation type, clients SHOULD NOT automatically access that resource to avoid overburdening its server.

So http://restexample.org/relationTypes/foo could be our own relation type. HAL tells us to provide links that can be dereferenced in a web browser and that provide documentation. And we can use CURIEs to shorten these links. Here’s an example:

```json
{
  "_links": {
    "self": {
      "href": "/orders"
    },
    "curies": [
      {
        "name": "acme",
        "href": "http://docs.acme.com/relations/{rel}",
        "templated": true
      }
    ],
    "acme:widgets": {
      "href": "/widgets"
    }
  }
}
```

So acme:widgets unfolds to http://docs.acme.com/relations/widgets.

AspNet Core

We can find a list of libraries for HAL here. Most seem out of date (at the time of writing this post), so the choice we have is Halcyon. Basically it provides us with a way to return HAL without changing our existing model, using an extension method on the controller.

```csharp
return this.HAL(model, new Link[] {
    new Link("self", "/api/foo/{id}"),
    new Link("foo:bar", "/api/foo/{id}/bar")
});
```

So let’s try Halcyon out on the InvoicesController. The first change is GET /invoices/{id}:

```csharp
[HttpGet("{id}", Name = "GetInvoice")]
public IActionResult Get(string id)
{
    var invoice = invoiceRepository.Get(id);
    if (invoice == null)
    {
        return NotFound();
    }
    var currentETag = new EntityTagHeaderValue($"\"{invoice.Version}\"");
    if (IfMatchGivenIfNoneMatch(currentETag))
    {
        return StatusCode((int)HttpStatusCode.NotModified);
    }
    var responseHeaders = Response.GetTypedHeaders();
    responseHeaders.ETag = currentETag;
    return this.HAL(getInvoiceMapper.ToModel(invoice),
        new Link("self", Request.Path));
}
```

The HAL extension method is supplied with our model and the self link (it will add a self link by itself if not specified, but with a non-standard method property).

The response for /invoices/1 now looks like this:

```json
{
  "InvoiceDate": "2016-12-21T00:00:00+01:00",
  "DueDate": "2016-12-21T00:00:00+01:00",
  "Customer": {
    "Name": "Customer 1",
    "AddressLines": []
  },
  "SubTotal": 20.0,
  "Lines": [
    {
      "LineNumber": 1,
      "Description": "Line 1",
      "Quantity": 2.0,
      "ItemPrice": 10.0,
      "Total": 20.0
    }
  ],
  "Id": "1",
  "_links": {
    "self": {
      "href": "/invoices/1"
    }
  }
}
```

So a _links section is added with a self reference. The next change is GET /invoices/. Here we use the embedded overload of the extension method.

```csharp
[HttpGet]
public IActionResult Get()
{
    var preferHeader = Request.Headers.Prefer();
    if (SupportedRepresentations.Contains(preferHeader.Return))
    {
        Response.Headers.Add("Preference-Applied", "return=" + preferHeader.Return);
        Response.Headers.Add("Vary", VaryHeaderValue);
    }
    var invoices = invoiceRepository.GetAll();
    if (preferHeader.Return == ReturnMinimal)
    {
        return this.HAL<object, GetMinimalInvoice>(null,
            new Link("self", Request.Path),
            "invoices",
            invoices.Select(invoice => getMinimalInvoiceMapper.ToModel(invoice)),
            new Link("self", "/invoices/{Id}"));
    }
    return this.HAL<object, GetInvoice>(null,
        new Link("self", Request.Path),
        "invoices",
        invoices.Select(invoice => getInvoiceMapper.ToModel(invoice)),
        new Link("self", "/invoices/{Id}"));
}
```

Now, we don’t really have any specific data for the collection itself - which is the reason for passing null as the model in the first parameter of the HAL method. Then comes the collection link, followed by the name of the embedded collection. Next is the embedded collection with its link. The link is specified as a template that takes the Id from each invoice. Here is the GET response:

```json
{
  "_links": {
    "self": {
      "href": "/invoices/"
    }
  },
  "_embedded": {
    "invoices": [
      {
        "InvoiceDate": "2016-12-21T00:00:00+01:00",
        "DueDate": "2016-12-21T00:00:00+01:00",
        "Customer": {
          "Name": "Customer 1",
          "AddressLines": []
        },
        "SubTotal": 20.0,
        "Lines": [
          {
            "LineNumber": 1,
            "Description": "Line 1",
            "Quantity": 2.0,
            "ItemPrice": 10.0,
            "Total": 20.0
          }
        ],
        "Id": "1",
        "_links": {
          "self": {
            "href": "/invoices/1"
          }
        }
      },
      {
        "InvoiceDate": "2016-12-21T00:00:00+01:00",
        "DueDate": "2016-12-21T00:00:00+01:00",
        "Customer": {
          "Name": "Customer 2",
          "AddressLines": []
        },
        "SubTotal": 20.0,
        "Lines": [
          {
            "LineNumber": 1,
            "Description": "Line 1",
            "Quantity": 2.0,
            "ItemPrice": 10.0,
            "Total": 20.0
          }
        ],
        "Id": "2",
        "_links": {
          "self": {
            "href": "/invoices/2"
          }
        }
      }
    ]
  }
}
```

Providing paging would now be a question of adding “prev” and “next” links.
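For example, a hypothetical sketch with Halcyon (the page and pageCount variables and the query parameter are illustrative, not part of the sample code):

```csharp
// Build the collection links depending on where we are in the result set.
var links = new List<Link> { new Link("self", $"/invoices?page={page}") };
if (page > 1)
{
    links.Add(new Link("prev", $"/invoices?page={page - 1}"));
}
if (page < pageCount)
{
    links.Add(new Link("next", $"/invoices?page={page + 1}"));
}
```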

Wrap-up

Writing this post I realized that a newbie wanting to use hypermedia has a rather large evaluation task to go through.

Building a basic REST AspNet Core Example: Caching - Part 7

If you need to catch up on the previous posts then see parts 1 & 2, 3, 4, 5, and 6. The source code for this post is on GitHub.

This post will take a look at RFC 7234, which is about HTTP 1.1 caching. It is mainly about two headers: Expires and Cache-Control. The first one is for server responses. The second one can be used on both requests and responses; Cache-Control has the highest precedence. If your API returns a 200 response to a GET (a safe method) then it can be subject to caching. If you don’t specify anything in your GET response, a cache may assign a heuristic expiration time. The status codes where this can happen are:

Responses with status codes that are defined as cacheable by default (e.g., 200, 203, 204, 206, 300, 301, 404, 405, 410, 414, and 501 in this specification) can be reused by a cache with heuristic expiration unless otherwise indicated by the method definition or explicit cache controls [RFC7234]; all other status codes are not cacheable by default.

If you specify Last-Modified or ETag these are taken into account.

Expires Header

You can read about the Expires header here. Basically it allows you to specify a date after which the response is considered stale, like this:

Expires: Fri, 16 Dec 2016 16:00:00 GMT

Cache-Control Header

You can read about the Cache-Control header here. The directives you can use depend on whether we are talking about the request or the response. So let’s look briefly at some of these.

Request Cache-Control Directives

  • no-cache - Specify this directive if the client is not willing to accept a cached response without validation against the origin server.

  • max-age - Specifying Cache-Control: max-age=5 means that the client is willing to accept a response that is up to 5 seconds old.

  • max-stale - Here the client is willing to accept a stale result, e.g., Cache-Control: max-stale=5.

  • min-fresh - Here you say that you want a result that will be fresh for at least the time specified.

  • no-transform - Says no intermediary may transform the payload.

  • only-if-cached - Can be used to explicitly ask for a cached response. If it is not in the cache you will get a 504 (Gateway Timeout). An example request is shown below.
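For example, a client that insists on a freshly validated response could send (an illustrative command against the nginx setup shown later in this post):

```
curl -I -H "Cache-Control: no-cache" http://localhost:8080/files/1
```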

Response Cache-Control Directives

Some of the response directives are

  • must-revalidate - Says that once the response is stale, the cache must not use it without successful validation on the origin server. If the cache cannot reach the origin server it will return 504 (Gateway Timeout).

  • no-cache - Says that the response must not be used to satisfy subsequent requests without successful validation on the origin server.

  • no-store - Says that the response must not be stored in the cache.

  • public - Says that the response may be stored in either a shared or private (local) cache.

  • private - Says that the response may only be stored in the user’s private (local) cache.

  • max-age - Says that the response is stale after the specified number of seconds.

  • s-maxage - Says that the response is stale in a shared cache after the specified number of seconds.

Using Caching

You should use caching to provide scalability and decrease the load on your server. In the previous posts we made a FilesController that can return files. This could potentially put a load on the server that we may want to minimize. You can read more about caching with ASP.NET Core here. Basically we just have to use the ResponseCacheAttribute. Let’s try it out with curl and nginx (or whatever your preferred tools might be). Here I’ve put nginx in front of the ASP.NET Core app running on port 5000, so when hitting port 8080 nginx will be the caching intermediary.

```nginx
http {
    proxy_cache_path /home/mikael/nginx-cache levels=1:2 keys_zone=my_cache:10m max_size=10g
                     inactive=60m use_temp_path=off;
    server {
        listen 8080;
        location / {
            proxy_cache my_cache;
            proxy_pass http://127.0.0.1:5000/;
            add_header X-Cache-Status $upstream_cache_status;
        }
    }
}
```

Let’s try to get a file without any cache response header using

```
curl -I -X GET http://localhost:8080/files/1

HTTP/1.1 200 OK
Server: nginx/1.10.0 (Ubuntu)
Date: Sat, 17 Dec 2016 15:18:11 GMT
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: keep-alive
Last-Modified: Sat, 10 Dec 2016 00:00:00 GMT
X-Cache-Status: MISS
```

Notice the X-Cache-Status saying it is a cache MISS. Repeating the command will give the same result - although in principle the cache could have stored the response according to the RFC. Let’s try to add the attribute as shown here:

```csharp
[Route("files")]
public sealed class FilesController : Controller
{
    private readonly IRepository<File> fileRepository;

    public FilesController(IRepository<File> fileRepository)
    {
        this.fileRepository = fileRepository;
    }

    [HttpGet("{id}", Name = "GetFile")]
    [ResponseCache(Duration = 60)]
    public IActionResult Get(string id)
    {
```

We get the following response.

```
HTTP/1.1 200 OK
Server: nginx/1.10.0 (Ubuntu)
Date: Sat, 17 Dec 2016 15:24:02 GMT
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: keep-alive
Cache-Control: public,max-age=60
Last-Modified: Sat, 10 Dec 2016 00:00:00 GMT
X-Cache-Status: MISS
```

This says that the response can be cached for 60 seconds in both private and shared caches. Retrying within the time limit, we get X-Cache-Status: HIT. After the 60 seconds we can see nginx make a new request and return X-Cache-Status: EXPIRED (make one more request and it will give a HIT again).
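As an illustrative command sequence (assuming the nginx setup above):

```
curl -I http://localhost:8080/files/1   # X-Cache-Status: MISS - response gets stored
curl -I http://localhost:8080/files/1   # X-Cache-Status: HIT  - served from the cache
# wait more than 60 seconds...
curl -I http://localhost:8080/files/1   # X-Cache-Status: EXPIRED - revalidated at the origin
```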

Now let’s specify that only the client may cache the result.

```csharp
[HttpGet("{id}", Name = "GetFile")]
[ResponseCache(Duration = 60, Location = ResponseCacheLocation.Client)]
public IActionResult Get(string id)
```

Now we will always get

```
HTTP/1.1 200 OK
Server: nginx/1.10.0 (Ubuntu)
Date: Sat, 17 Dec 2016 15:33:44 GMT
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: keep-alive
Cache-Control: private,max-age=60
Last-Modified: Sat, 10 Dec 2016 00:00:00 GMT
X-Cache-Status: EXPIRED
```

(or rather MISS as X-Cache-Status, if the cache did not already have the entry from before). If we specify ResponseCacheLocation.None as below, we must also specify the Duration:

```csharp
[HttpGet("{id}", Name = "GetFile")]
[ResponseCache(Location = ResponseCacheLocation.None, Duration = 60)]
public IActionResult Get(string id)
```

and the response becomes:

```
HTTP/1.1 200 OK
Server: nginx/1.10.0 (Ubuntu)
Date: Sat, 17 Dec 2016 15:42:59 GMT
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: keep-alive
Cache-Control: no-cache,max-age=60
Pragma: no-cache
Last-Modified: Sat, 10 Dec 2016 00:00:00 GMT
X-Cache-Status: EXPIRED
```

If we do not want the result cached at all we can do it like this

```csharp
[HttpGet("{id}", Name = "GetFile")]
[ResponseCache(NoStore = true)]
public IActionResult Get(string id)
```

and then we get

```
HTTP/1.1 200 OK
Server: nginx/1.10.0 (Ubuntu)
Date: Sat, 17 Dec 2016 15:46:43 GMT
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: keep-alive
Cache-Control: no-store
Last-Modified: Sat, 10 Dec 2016 00:00:00 GMT
X-Cache-Status: EXPIRED
```

The Vary header is also supported:

```csharp
[ResponseCache(Location = ResponseCacheLocation.Any, Duration = 60, VaryByHeader = "Accept-Encoding")]
```

```
HTTP/1.1 200 OK
Server: nginx/1.10.0 (Ubuntu)
Date: Sat, 17 Dec 2016 15:59:08 GMT
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: keep-alive
Cache-Control: public,max-age=60
Last-Modified: Sat, 10 Dec 2016 00:00:00 GMT
Vary: Accept-Encoding
X-Cache-Status: HIT
```

Using Cache Profiles

You probably don’t want to duplicate these attribute settings on many controllers. Instead you can define a profile in ConfigureServices in Startup.cs, as shown here:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc(options =>
    {
        options.CacheProfiles.Add("Default", new CacheProfile()
        {
            Duration = 60,
            Location = ResponseCacheLocation.Client
        });
        LimitFormattersToThisApplication(options);
    });
    ...
```

In the FilesController we can then change the attribute to:

```csharp
[HttpGet("{id}", Name = "GetFile")]
[ResponseCache(CacheProfileName = "Default")]
public IActionResult Get(string id)
```

If you don’t want attributes or profiles

In that case you can set everything yourself using the typed response headers: ResponseHeaders has a CacheControl property and an Expires property.
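A minimal sketch of that approach (assuming the typed headers API from Microsoft.Net.Http.Headers, inside an action method):

```csharp
using Microsoft.Net.Http.Headers;

// Set the cache headers directly on the response.
var responseHeaders = Response.GetTypedHeaders();
responseHeaders.CacheControl = new CacheControlHeaderValue
{
    Public = true,
    MaxAge = TimeSpan.FromSeconds(60)
};
responseHeaders.Expires = DateTimeOffset.UtcNow.AddSeconds(60);
```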

Without an External Server

If you do not want to use an external server like nginx, then ASP.NET Core itself has response caching middleware. The default you get is a memory cache. This may work for you depending on your use case.

To enable it you basically have to add the following to your Startup.cs file

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddResponseCaching();
}

public void Configure(IApplicationBuilder app)
{
    app.UseResponseCaching();
    ...
}
```

Running the basic sample in the AspNet GitHub repository and making two requests gives:

```
HTTP/1.1 200 OK
Date: Sat, 17 Dec 2016 16:16:32 GMT
Server: Kestrel
Cache-Control: public, max-age=10
Transfer-Encoding: chunked
Vary: Accept-Encoding
Vary: Non-Existent

HTTP/1.1 200 OK
Date: Sat, 17 Dec 2016 16:16:32 GMT
Content-Length: 32
Server: Kestrel
Cache-Control: public, max-age=10
Age: 5
Vary: Accept-Encoding
Vary: Non-Existent
```

It’s the Age header that tells us we have caching in place.

Wrap-up

That’s all I wanted to mention about caching. I will continue this REST example in future posts.

Building a basic REST AspNet Core Example - Part 6

If you need to catch up on the previous posts then see parts 1 & 2, 3, 4, and 5. The source code for this post is on GitHub.

In this post I will look at additional HTTP features - HTTP status 202 Accepted and the Retry-After header - and see how they can be used with the FilesController created in the last post.

HTTP Status 202 Accepted

You can read about 202 Accepted in RFC 7231 section 6.3.3:

The 202 (Accepted) status code indicates that the request has been accepted for processing, but the processing has not been completed. The request might or might not eventually be acted upon, as it might be disallowed when processing actually takes place.

This status code fits well with adding a new file using POST. We could perform a virus scan or do other custom processing before the file becomes available. At the time of writing this post, the AcceptedAtRoute response has only been added to the dev branch of AspNet on GitHub, so we will do a hack and add it to our solution. Likewise, IFormCollection is in a newer version than shown here. So here’s the implementation.

```csharp
[HttpPost]
public IActionResult Post()
{
    var file = Request.Form.Files.FirstOrDefault();
    if (file == null)
    {
        return BadRequest();
    }
    File newFile;
    using (var stream = new MemoryStream())
    {
        file.CopyTo(stream);
        newFile = new File(stream.ToArray(), DateTimeOffset.Now, file.ContentType);
        fileRepository.Create(newFile);
    }
    return new AcceptedAtRouteResult("GetFile", new { id = newFile.Id }, null);
}
```

You should imagine that fileRepository.Create spins off, e.g., a virus check.
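To make that idea concrete, here is a hypothetical sketch of what Create could do; the ScanForViruses helper, the files dictionary, and a settable IsAvailable flag are assumptions for illustration, not part of the sample code:

```csharp
public File Create(File file)
{
    // Not yet available: GET will answer 204 + Retry-After until processing is done.
    file.IsAvailable = false;
    files[file.Id] = file;

    // Fire-and-forget background work for demo purposes only; a real
    // implementation would use a proper queue or background service.
    Task.Run(() =>
    {
        ScanForViruses(file);    // hypothetical processing step
        file.IsAvailable = true; // from now on, GET returns the content
    });

    return file;
}
```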

Retry-After Header

You can read about the Retry-After header in RFC 7231 section 7.1.3

Servers send the “Retry-After” header field to indicate how long the user agent ought to wait before making a follow-up request. When sent with a 503 (Service Unavailable) response, Retry-After indicates how long the service is expected to be unavailable to the client.

Here we are going to use Retry-After in another way - with the GET request that we pointed to with the AcceptedAtRouteResult. I can imagine this behaviour is debatable, so don’t go do it without considering the consequences. But in this case, when the file isn’t available yet, we will return a 204 No Content status along with the Retry-After header.

```csharp
public IActionResult Get(string id)
{
    var file = fileRepository.Get(id);
    if (file == null)
    {
        return NotFound();
    }
    var requestHeaders = Request.GetTypedHeaders();
    if (requestHeaders.IfNoneMatch == null &&
        requestHeaders.IfModifiedSince.HasValue &&
        requestHeaders.IfModifiedSince.Value >= file.LastModified)
    {
        return StatusCode(StatusCodes.Status304NotModified);
    }
    if (!file.IsAvailable)
    {
        Response.Headers.Add(HeaderNames.RetryAfter, "60");
        return NoContent();
    }
    var responseHeaders = Response.GetTypedHeaders();
    responseHeaders.LastModified = file.LastModified;
    return File(file.Content, file.ContentType);
}
```

I have introduced an IsAvailable property on the File class to determine if a No Content response along with a Retry-After header should be returned. Now when we make a request for a newly uploaded file we get:

```
HTTP/1.1 204 No Content
Date: Fri, 16 Dec 2016 10:08:26 GMT
Retry-After: 60
Server: Kestrel
```

which says we should retry after 60 seconds.
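On the client side, honoring the header could look roughly like this (a sketch using HttpClient; the polling loop and URL are illustrative):

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

// Poll until the file becomes available, waiting as instructed by Retry-After.
async Task<HttpResponseMessage> GetFileAsync(HttpClient client, string id)
{
    var response = await client.GetAsync($"/files/{id}");
    while (response.StatusCode == HttpStatusCode.NoContent &&
           response.Headers.RetryAfter?.Delta is TimeSpan delay)
    {
        await Task.Delay(delay);
        response = await client.GetAsync($"/files/{id}");
    }
    return response;
}
```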

Wrap-up

That’s it for now. But the REST story will continue in other posts.

Building a basic REST AspNet Core Example - Part 5

If you need to catch up on the previous posts then see parts 1 & 2, 3, and 4. The source code for this post is on GitHub.

In this post I will look at additional HTTP features. We will revisit the case of detecting when a resource was modified. We looked at ETag previously; in this post we will tackle the case where we want to go with a date for comparison instead. This should be done if generating an ETag does not make sense for you.

Last-Modified, If-Modified-Since, If-Unmodified-Since

When we looked at ETag in the InvoicesController we did not provide a Last-Modified response header. In the case of an invoice we would, in real life, probably also keep track of modifications, and thus we should provide this header according to RFC 7232:

```
In 200 (OK) responses to GET or HEAD, an origin server:

o  SHOULD send an entity-tag validator unless it is not feasible to
   generate one.
...
o  SHOULD send a Last-Modified value if it is feasible to send one.
```

Here we will look at a limited FilesController where files can be uploaded and retrieved. I will write just enough code to highlight the HTTP features. We will add a simplified File class to the domain.

```csharp
public sealed class File
{
    public File(byte[] content, DateTimeOffset lastModified, string contentType)
    {
        Content = content;
        LastModified = lastModified;
        ContentType = contentType;
    }

    public byte[] Content { get; }
    public DateTimeOffset LastModified { get; }
    public string ContentType { get; }
}
```

and provide a FileRepository for it (not shown here). Now we can make a simple GET implementation like this that returns the Last-Modified header:

```csharp
[Route("files")]
public sealed class FilesController : Controller
{
    private readonly IRepository<File> fileRepository;

    public FilesController(IRepository<File> fileRepository)
    {
        this.fileRepository = fileRepository;
    }

    [HttpGet("{id}")]
    public IActionResult Get(string id)
    {
        var file = fileRepository.Get(id);
        if (file == null)
        {
            return NotFound();
        }
        var responseHeaders = Response.GetTypedHeaders();
        responseHeaders.LastModified = file.LastModified;
        return File(file.Content, file.ContentType);
    }
}
```

Let’s extend this to handle If-Modified-Since.

If-Modified-Since

We have to live up to the following in RFC 7232 section 3.3:

A recipient MUST ignore If-Modified-Since if the request contains an If-None-Match header field;
A recipient MUST ignore the If-Modified-Since header field if the received field-value is not a valid HTTP-date, or if the request method is neither GET nor HEAD.

This can be fulfilled by this implementation:

```csharp
[HttpGet("{id}")]
public IActionResult Get(string id)
{
    var file = fileRepository.Get(id);
    if (file == null)
    {
        return NotFound();
    }
    var requestHeaders = Request.GetTypedHeaders();
    if (requestHeaders.IfNoneMatch == null &&
        requestHeaders.IfModifiedSince.HasValue &&
        requestHeaders.IfModifiedSince.Value >= file.LastModified)
    {
        return StatusCode(StatusCodes.Status304NotModified);
    }
    var responseHeaders = Response.GetTypedHeaders();
    responseHeaders.LastModified = file.LastModified;
    return File(file.Content, file.ContentType);
}
```

If-Unmodified-Since

You can find the description of the If-Unmodified-Since header in RFC 7232 section 3.4. The use case is, as the standard says:

If-Unmodified-Since is most often used with state-changing methods (e.g., POST, PUT, DELETE) to prevent accidental overwrites when multiple user agents might be acting in parallel on a resource that does not supply entity-tags with its representations

Our example will be the DELETE method.

```csharp
[HttpDelete("{id}")]
public IActionResult Delete(string id)
{
    if (!fileRepository.Exists(id))
    {
        return NotFound();
    }
    // Yeah, this you would never do in real life
    var file = fileRepository.Get(id);
    var requestHeaders = Request.GetTypedHeaders();
    if (requestHeaders.IfMatch != null)
    {
        if (!requestHeaders.IfMatch.All(match => match.Equals(EntityTagHeaderValue.Any)))
        {
            return StatusCode(StatusCodes.Status412PreconditionFailed);
        }
    }
    else
    {
        if (requestHeaders.IfUnmodifiedSince.HasValue &&
            requestHeaders.IfUnmodifiedSince.Value < file.LastModified)
        {
            return StatusCode(StatusCodes.Status412PreconditionFailed);
        }
    }
    fileRepository.Delete(id);
    return NoContent();
}
```

Wrap-up

So we have now seen how the Last-Modified, If-Modified-Since, and If-Unmodified-Since headers can work. Next up is more HTTP features.

Building a basic REST AspNet Core Example - Part 4

If you need to catch up on the previous posts then see parts 1 & 2 and 3. The source code for this post is on GitHub.

Here we are going to take a look at how to tackle the GET /invoices call. I’ll repeat the disclaimer - it is going to be pretty naive and basic, and I will build on it in further posts (hopefully). We are still at level 2 in Richardson’s maturity model and will not take hypermedia in just yet. We will not even reach a full GET implementation in this post.

GET invoices/ with HTTP Prefer Header

One of the first things we have to think about is what resource representation we want. We could say we want to return the complete invoices, or we could say we do not want to return the lines. Let’s start with the simplest case: we return a list with the full representation.

```csharp
[HttpGet]
public IActionResult Get()
{
    var invoices = invoiceRepository.GetAll().Select(
        invoice => getInvoiceMapper.ToModel(invoice));
    return Ok(invoices);
}
```

but in many cases that may not be what our client is interested in. We could probably cut the invoice lines. There are many ways to do this in “the wild”, but here we are going to look at the HTTP Prefer header from RFC 7240 as a way to tell the server how we prefer the response. To introduce Prefer briefly, the RFC says:

The Prefer request header field is used to indicate that particular server behaviors are preferred by the client but are not required for successful completion of the request. Prefer is similar in nature to the Expect header field defined by Section 6.1.2 of [RFC7231] with the exception that servers are allowed to ignore stated preferences.

Section 4.2 specifies

The “return=representation” preference indicates that the client prefers that the server include an entity representing the current state of the resource in the response to a successful request.

The “return=minimal” preference, on the other hand, indicates that the client wishes the server to return only a minimal response to a successful request

which is just what we need here. So we want to be able to handle the following requests:

```
GET /invoices HTTP/1.1
Prefer: return=representation

GET /invoices HTTP/1.1
Prefer: return=minimal
```

and the response should state which preferences we have applied:

```
HTTP/1.1 200 OK
Preference-Applied: return=representation
```

You can also take a look at the preference parameters that IANA keeps a list of here. But we are actually also required to return the Vary header, so that we do not mess up the HTTP infrastructure, as shown here:

```
HTTP/1.1 200 OK
Date: Wed, 14 Dec 2016 12:08:04 GMT
Content-Type: application/vnd.restexample.finance+json; charset=utf-8
Vary: Prefer,Accept,Accept-Encoding
Preference-Applied: return=representation
```

Here’s the implementation

```csharp
[HttpGet]
public IActionResult Get()
{
    var preferHeader = Request.Headers.Prefer();
    if (SupportedRepresentations.Contains(preferHeader.Return))
    {
        Response.Headers.Add("Preference-Applied", "return=" + preferHeader.Return);
        Response.Headers.Add("Vary", VaryHeaderValue);
    }
    var invoices = invoiceRepository.GetAll();
    if (preferHeader.Return == ReturnMinimal)
    {
        return Ok(invoices.Select(invoice => getMinimalInvoiceMapper.ToModel(invoice)));
    }
    return Ok(invoices.Select(invoice => getInvoiceMapper.ToModel(invoice)));
}
```

We use a helper method to get the Prefer header (a sketch of a possible implementation is shown after the model below). If the representation asked for can be fulfilled, then we return the required headers. If return=minimal is specified, then we return a list of invoices using the minimal model specified here:

```csharp
public class GetMinimalInvoice
{
    public DateTimeOffset InvoiceDate { get; set; }
    public DateTimeOffset DueDate { get; set; }
    public GetInvoiceCustomer Customer { get; set; }
    public decimal SubTotal { get; set; }
    public string Id { get; set; }
}
```

Wrap-up

That concludes the first view of GET /invoices.

Building a basic REST AspNet Core Example - Part 3

In the previous parts 1 & 2 we built an invoices controller with support for the POST/PUT/GET/PATCH HTTP methods. In this one we will add the HTTP HEAD method and ETag support. I’ll repeat the disclaimer - it is going to be pretty naive and basic, but I will add to it in future posts (hopefully). The source code for this post is on GitHub.

Adding HTTP HEAD support

You can read about the HEAD verb in the HTTP protocol specification. The part we are interested in is:

The HEAD method is identical to GET except that the server MUST NOT send a message body in the response (i.e., the response terminates at the end of the header section). The server SHOULD send the same header fields in response to a HEAD request as it would have sent if the request had been a GET, except that the payload header fields MAY be omitted

The verb is useful for existence checks, validating that you have the latest version, etc. Here we will use it for existence checking to start with. We will omit the Content-Length header, since we are allowed to do so and our use case does not rely on it.

So let’s add a minimum implementation to the InvoicesController

```csharp
[HttpHead("{id}")]
public IActionResult HeadForInvoice(string id)
{
    Response.ContentType = ApiDefinition.ApiMediaType;
    if (!invoiceRepository.Exists(id))
    {
        return NotFound();
    }
    return Ok();
}
```

We’d still like to return the Content-Type, so we set it on the response.

The solution is still only at Richardson’s maturity model level 2. And we can do better in multiple areas.

ETag, If-Match, If-None-Match

Until now we have not really cared whether our PUT call tries to update something even though it may have a stale view of the resource. The ETag header can help us here. We can specify the current entity version in the GET response with an ETag header. When we do a PUT we can then specify the If-Match header with the ETag; thus we will only update the resource if the ETag matches. But to use the ETag properly we want to use it on most HTTP verbs. The example below uses the invoice version as the ETag - in principle this should have been an “opaque” value instead.

GET

We should return an ETag in the response when a GET request is made. It could be done like this (somewhat clumsy):

```csharp
[HttpGet("{id}", Name = "GetInvoice")]
public IActionResult Get(string id)
{
    var invoice = invoiceRepository.Get(id);
    if (invoice == null)
    {
        return NotFound();
    }
    var responseHeaders = Response.GetTypedHeaders();
    responseHeaders.ETag = new EntityTagHeaderValue($"\"{invoice.Version}\"");
    return Ok(getInvoiceMapper.ToModel(invoice));
}
```

Please note the invoice has been extended with a Version property that is changed when the update method of the repository is called.

So now when we make a request the response headers can look like this

```
HTTP/1.1 200 OK
Date: Mon, 12 Dec 2016 16:21:21 GMT
Transfer-Encoding: chunked
Content-Type: application/vnd.restexample.finance+json; charset=utf-8
ETag: "e90baf01-40ce-4f46-89ea-227bb32fc421"
Server: Kestrel
```

We should also be able to handle the ETag if it is specified in the request. It makes sense to implement support for the If-None-Match header: we return the entity only if the ETag does not match the current version of the invoice. If it matches, we return HTTP 304 Not Modified.

```csharp
[HttpGet("{id}", Name = "GetInvoice")]
public IActionResult Get(string id)
{
    var invoice = invoiceRepository.Get(id);
    if (invoice == null)
    {
        return NotFound();
    }
    var currentETag = new EntityTagHeaderValue($"\"{invoice.Version}\"");
    if (IfMatchGivenIfNoneMatch(currentETag))
    {
        return StatusCode((int)HttpStatusCode.NotModified);
    }
    var responseHeaders = Response.GetTypedHeaders();
    responseHeaders.ETag = currentETag;
    return Ok(getInvoiceMapper.ToModel(invoice));
}

private bool IfMatchGivenIfNoneMatch(EntityTagHeaderValue currentETag)
{
    var requestHeaders = Request.GetTypedHeaders();
    return requestHeaders.IfNoneMatch != null &&
        requestHeaders.IfNoneMatch.Contains(currentETag);
}
```

So now we can add the If-None-Match header to our request like this

If-None-Match: "e90baf01-40ce-4f46-89ea-227bb32fc421"

and if it is current we get these response headers

```
HTTP/1.1 304 Not Modified
Date: Mon, 12 Dec 2016 17:28:02 GMT
Server: Kestrel
```

PUT/PATCH

Here we are going to return HTTP status 412 Precondition Failed if an ETag is provided and it does not match. The request should have the If-Match header added to it. Here is PUT:

```csharp
[HttpPut("{id}")]
public IActionResult Put(string id, [FromBody] UpdateInvoice updateInvoice)
{
    if (updateInvoice == null)
    {
        return BadRequest();
    }
    if (!ModelState.IsValid)
    {
        return BadRequest(ModelState);
    }
    if (!invoiceRepository.Exists(id))
    {
        return NotFound();
    }
    if (IfMatchIsInvalid(invoiceRepository.GetCurrentVersion(id)))
    {
        return StatusCode((int)HttpStatusCode.PreconditionFailed);
    }
    var newVersion = invoiceRepository.Update(updateInvoiceMapper.ToDomain(
        updateInvoice, id)).Value;
    var responseHeaders = Response.GetTypedHeaders();
    responseHeaders.ETag = new EntityTagHeaderValue($"\"{newVersion}\"");
    return NoContent();
}

private bool IfMatchIsInvalid(string currentVersion)
{
    var requestHeaders = Request.GetTypedHeaders();
    var currentETag = new EntityTagHeaderValue($"\"{currentVersion}\"");
    return requestHeaders.IfMatch != null &&
        !requestHeaders.IfMatch.Any(ifm => ifm.Equals(EntityTagHeaderValue.Any)) &&
        !requestHeaders.IfMatch.Contains(currentETag);
}
```

The ETag can be specified as * thus we also check against EntityTagHeaderValue.Any.

We do the same for PATCH

```csharp
[HttpPatch("{id}")]
public IActionResult Patch(string id, [FromBody] JsonPatchDocument<UpdateInvoice> patchDocument)
{
    if (patchDocument == null)
    {
        return BadRequest();
    }
    var invoice = invoiceRepository.Get(id);
    if (invoice == null)
    {
        return NotFound();
    }
    if (IfMatchIsInvalid(invoice.Version))
    {
        return StatusCode((int)HttpStatusCode.PreconditionFailed);
    }
    var updateInvoice = updateInvoiceMapper.ToModel(invoice);
    patchDocument.ApplyTo(updateInvoice, ModelState);
    if (!ModelState.IsValid)
    {
        return BadRequest(ModelState);
    }
    var updatedDomainInvoice = updateInvoiceMapper.ToDomain(updateInvoice, id);
    var newVersion = invoiceRepository.Update(updatedDomainInvoice);
    var responseHeaders = Response.GetTypedHeaders();
    responseHeaders.ETag = new EntityTagHeaderValue($"\"{newVersion}\"");
    return NoContent();
}
```

HEAD should behave like GET, so we reuse that logic:

```csharp
[HttpHead("{id}")]
public IActionResult HeadForInvoice(string id)
{
    Response.ContentType = ApiDefinition.ApiMediaType;
    if (!invoiceRepository.Exists(id))
    {
        return NotFound();
    }
    var currentVersion = invoiceRepository.GetCurrentVersion(id);
    var currentETag = new EntityTagHeaderValue($"\"{currentVersion}\"");
    if (IfMatchGivenIfNoneMatch(currentETag))
    {
        return StatusCode((int)HttpStatusCode.NotModified);
    }
    var responseHeaders = Response.GetTypedHeaders();
    responseHeaders.ETag = currentETag;
    return Ok();
}
```

POST

POST should return an ETag

```csharp
[HttpPost]
public IActionResult Post([FromBody] UpdateInvoice updateInvoice)
{
    if (updateInvoice == null)
    {
        return BadRequest();
    }
    if (!ModelState.IsValid)
    {
        return BadRequest(ModelState);
    }
    var createdInvoice = invoiceRepository.Create(updateInvoiceMapper.ToDomain(updateInvoice));
    var responseHeaders = Response.GetTypedHeaders();
    responseHeaders.ETag = new EntityTagHeaderValue($"\"{createdInvoice.Version}\"");
    return CreatedAtRoute("GetInvoice",
        new { id = createdInvoice.Id },
        getInvoiceMapper.ToModel(createdInvoice));
}
```

Wrap-up

That is it for the most basic api for now. But we can do more as we will see in later posts (hopefully).

Building a basic REST AspNet Core Example - Part 2

In the last post we made the first step towards a REST API. In this post we will improve it by adding HTTP PATCH and OPTIONS support. I’ll repeat the disclaimer - it is going to be pretty naive and basic, but I will add to it in future posts (hopefully). The source code for this post is here.

Adding HTTP PATCH support

With PUT we have to supply the complete invoice in order to update it. That may not be what you want, so HTTP PATCH (RFC 5789) can help us here by allowing partial resource modification.

The PATCH method requests that a set of changes described in the request entity be applied to the resource identified by the Request-URI. The set of changes is represented in a format called a “patch document” identified by a media type

For JSON in .NET Core this is implemented using JavaScript Object Notation (JSON) Patch. The media type is “application/json-patch+json”. From the JSON Patch RFC you can read:

```
A JSON Patch document is a JSON [RFC4627] document that represents an
array of objects. Each object represents a single operation to be
applied to the target JSON document.

The following is an example JSON Patch document, transferred in a HTTP
PATCH request:

PATCH /my/data HTTP/1.1
Host: example.org
Content-Length: 326
Content-Type: application/json-patch+json
If-Match: "abc123"

[
  { "op": "test", "path": "/a/b/c", "value": "foo" },
  { "op": "remove", "path": "/a/b/c" },
  { "op": "add", "path": "/a/b/c", "value": [ "foo", "bar" ] },
  { "op": "replace", "path": "/a/b/c", "value": 42 },
  { "op": "move", "from": "/a/b/c", "path": "/a/b/d" },
  { "op": "copy", "from": "/a/b/d", "path": "/a/b/e" }
]
```

So { "op": "replace", "path": "/a/b/c", "value": 42 } tells us that the operation should replace the target value with 42. The path is a JSON Pointer. If you want to take a look at the aspnet source code, then look here. From the unit tests you can also see that the "test" operation is not supported.

So let us extend our InvoicesController

```csharp
[HttpPatch("{id}")]
public IActionResult Patch(string id, [FromBody] JsonPatchDocument<UpdateInvoice> patchDocument)
{
    if (patchDocument == null)
    {
        return BadRequest();
    }
    var invoice = invoiceRepository.Get(id);
    if (invoice == null)
    {
        return NotFound();
    }
    var updateInvoice = updateInvoiceMapper.ToModel(invoice);
    patchDocument.ApplyTo(updateInvoice, ModelState);
    if (!ModelState.IsValid)
    {
        return BadRequest(ModelState);
    }
    var updatedDomainInvoice = updateInvoiceMapper.ToDomain(updateInvoice, id);
    invoiceRepository.Update(updatedDomainInvoice);
    return NoContent();
}
```

In this method our input is now a JsonPatchDocument, meaning it is an UpdateInvoice model we perform the operations on. Again we start out by validating whether the input is malformed. Next we handle the case where the invoice is not found. Since we perform the operations against an instance of UpdateInvoice, we must construct that instance first from the domain model, so ToModel transforms the loaded entity to our model representation. Then the patch operations are applied using patchDocument.ApplyTo. We also pass in the ModelState so that validation will still kick in; after ApplyTo we check the ModelState. The last step is to update the invoice using the repository.
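To make this concrete, here is a hypothetical request that replaces the description of the first invoice line (assuming the patched Lines collection supports index-based JSON Pointer paths):

```
PATCH /invoices/1 HTTP/1.1
Content-Type: application/json-patch+json

[
  { "op": "replace", "path": "/lines/0/description", "value": "consultancy (senior)" }
]
```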

HTTP Options

To provide a more complete REST Api we can choose to implement the HTTP OPTIONS verb.

The OPTIONS method represents a request for information about the communication options available on the request/response chain identified by the Request-URI. This method allows the client to determine the options and/or requirements associated with a resource, or the capabilities of a server, without implying a resource action or initiating a resource retrieval.

Using it we can determine which other HTTP verbs are available at a given URI. We can use this, e.g., to remove the HTTP PUT/PATCH methods when no instance exists. The methods that are available should be specified in the Allow header:

Allow: GET, POST, PUT

So let’s add the first OPTIONS method for the /invoices URI:

```csharp
[HttpOptions]
public IActionResult Options()
{
    Response.Headers.Add("Allow", string.Join(",", HttpVerbs.Options, HttpVerbs.Post));
    return NoContent();
}
```

Now when we make an OPTIONS request we get:

```
HTTP/1.1 204 No Content
Date: Mon, 12 Dec 2016 10:22:07 GMT
Allow: OPTIONS,POST
```

basically saying all we can do is create a new invoice.

When we call HTTP OPTIONS on an invoice we should return OPTIONS,GET,PUT,PATCH if the invoice exists; if not, then OPTIONS,GET. I include GET since it can return a 404 as the response.

```csharp
[HttpOptions("{id}")]
public IActionResult OptionsForInvoice(string id)
{
    if (!invoiceRepository.Exists(id))
    {
        Response.Headers.Add("Allow", string.Join(",",
            HttpVerbs.Options,
            HttpVerbs.Get));
    }
    else
    {
        Response.Headers.Add("Allow", string.Join(",",
            HttpVerbs.Options,
            HttpVerbs.Get,
            HttpVerbs.Put,
            HttpVerbs.Patch));
    }
    return NoContent();
}
```

Wrap up

That’s it for this time, but we are still not there so other posts will follow.

Building a basic REST AspNet Core Example - Part 1

In this post we will start on an app that exposes a REST web API for an invoice. Disclaimer - it is going to be pretty naive and basic, but I will add to it in further posts (hopefully). So for now we will go no higher in Richardson’s maturity model than level 2.

My inspiration for this blog is attending Skills Matter’s “Fast-Track to RESTful Microservices” course with Jim Webber. It is heavily based on the book REST in Practice: Hypermedia and Systems Architecture. Although the book is now old it still contains good information (the code examples are naturally outdated) - so read it if you need extensive REST information and can live with the fact that not everything is up to date.

The case here is creating, updating and getting an invoice along the lines of these user stories.

```
As a <business user>
I want <to be able to create an invoice for a customer>
So that <we can facilitate sales>

As a <business user>
I want <to be able to update an invoice>
So that <it can be created in steps>

As a <business user>
I want <to be able to view the invoice>
So that <I can answer questions related to it>
```

Getting a list of invoices, searching etc is too advanced for now.

The server code will be a .NET Core ASP.NET web API (and for the fun of it I’m developing on Ubuntu with JetBrains Rider). It will be without a real database backing it for now, since the focus is mostly on the REST part. It does take a pretty naive approach to the design of the API, but hopefully I will get to a post where this part is evolved.

You can find the code discussed on GitHub in the 1_InvoiceApi folder.

The Domain

We will call our bounded context finance, and the primary class will be the invoice. We will start with this naive definition

```csharp
namespace Finance.Domain.Domain
{
    public sealed class Invoice
    {
        public Invoice(string id, DateTimeOffset invoiceDate,
            DateTimeOffset dueDate,
            InvoiceCustomer invoiceCustomer,
            IEnumerable<InvoiceLine> lines)
        {
            InvoiceDate = invoiceDate;
            DueDate = dueDate;
            Customer = invoiceCustomer;
            Lines = lines;
            Id = id;
        }

        public string Id { get; set; }
        public DateTimeOffset InvoiceDate { get; }
        public DateTimeOffset DueDate { get; }
        public InvoiceCustomer Customer { get; }
        public IEnumerable<InvoiceLine> Lines { get; }
        public Amount SubTotal => new Amount(Lines.Sum(l => l.Total.Value));
    }
}
```

The customer and lines are not important for understanding at present time.

The repository

The invoice is accompanied by an InvoiceRepository class implementing IRepository&lt;Invoice&gt;, which looks like this:

```csharp
public interface IRepository<T>
{
    T Get(string id);
    void Update(T instance);
    T Create(T instance);
    bool Exists(string id);
}
```

For now we have a list of 1000 prepopulated invoices with ids 1-1000. The repository is a simple dictionary.
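A hypothetical sketch of such a repository (the actual code in the repo may differ, e.g., in how ids are assigned):

```csharp
public sealed class InvoiceRepository : IRepository<Invoice>
{
    private readonly Dictionary<string, Invoice> invoices =
        new Dictionary<string, Invoice>();

    public Invoice Get(string id)
    {
        Invoice invoice;
        return invoices.TryGetValue(id, out invoice) ? invoice : null;
    }

    public void Update(Invoice instance)
    {
        invoices[instance.Id] = instance;
    }

    public Invoice Create(Invoice instance)
    {
        invoices[instance.Id] = instance;
        return instance;
    }

    public bool Exists(string id)
    {
        return invoices.ContainsKey(id);
    }
}
```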

Looking for the API

We will center the API around what the business does - that is, its domain and the mentioned user stories. With Level 1 in the REST maturity model we introduce resource representations. We use URLs to differentiate between these resource representations. So in our case it will be an invoice resource representation. Remember the resource representation does not have to match our domain model. It will be designed from what information we wish to convey, and we may have multiple resource representations for the same domain invoice.

So since we are going for a CRU(D) implementation here our invoice API can look like this

| HTTP Verb | URI | Usage |
| --- | --- | --- |
| GET | invoices/{id} | Get invoice representation |
| POST | invoices | Create a new invoice |
| PUT | invoices/{id} | Update an existing invoice |

Actually, we are already getting a good start on Level 2 (HTTP) in the maturity model. We are using the HTTP protocol verbs, and we must make sure to use status codes correctly, content types, and all the other good stuff (safe verbs and idempotency). Now you could ask where GET invoices/ is. For now it is not there - remember we had 1000 invoices prepopulated - it will need a little more design.

Deciding on a media type

We need to choose a media type for the HTTP calls. Since JSON is “in” at the moment we could choose application/json. However, we would like something less general-purpose, to be able to tell the client about the expected processing model (like introducing hypermedia later). So we create a media type in the vendor tree:

application/vnd.restexample.finance+json

assuming we, as the owner, are called “restexample”. We also added the application name “finance”, but should probably not break it down further, since that would put demands on the client if we have multiple media types. The suffix +json tells what the underlying representation is.

So in order for us to return any information, the client must specify the HTTP header:

Accept: application/vnd.restexample.finance+json

or any variant such as */*, application/*.

To enforce the media type we add the following to the Startup class.

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc(LimitFormattersToThisApplication);
}

private void LimitFormattersToThisApplication(MvcOptions options)
{
    options.RespectBrowserAcceptHeader = true;
    options.ReturnHttpNotAcceptable = true;
    var customMediaType = new MediaTypeHeaderValue(ApiDefinition.ApiMediaType);
    options.SetInputMediaType(customMediaType);
    options.SetOutputMediaType(customMediaType);
}

public static class MvcOptionsExtensions
{
    public static void SetInputMediaType(this MvcOptions options, MediaTypeHeaderValue mediaType)
    {
        var supportedInputMediaTypes = options
            .InputFormatters
            .OfType<JsonInputFormatter>()
            .First()
            .SupportedMediaTypes;
        SetAllowedMediaType(mediaType, supportedInputMediaTypes);
    }

    public static void SetOutputMediaType(this MvcOptions options, MediaTypeHeaderValue mediaType)
    {
        var supportedOutputMediaTypes = options
            .OutputFormatters
            .OfType<JsonOutputFormatter>()
            .First()
            .SupportedMediaTypes;
        SetAllowedMediaType(mediaType, supportedOutputMediaTypes);
    }

    private static void SetAllowedMediaType(MediaTypeHeaderValue mediaType,
        MediaTypeCollection supportedMediaTypes)
    {
        supportedMediaTypes.Clear();
        supportedMediaTypes.Add(mediaType);
    }
}
```

The extension methods set the output and input media types. Note that ReturnHttpNotAcceptable is set to true in order to return:

406 Not Acceptable

when the response format requested in the Accept header does not match our custom media type.

Invoice resource representation

GET

For now we will use this invoice representation for GET. It matches the domain representation, except for some value object representations in the domain (a DDD term):

```json
{
  "id": "1000",
  "invoiceDate": "2016-12-05T00:00Z",
  "dueDate": "2016-12-12T00:00Z",
  "subTotal": 9050,
  "customer": {
    "name": "Microsoft Development Center",
    "addressLines": ["Kanalvej 8", "2800 Kongens Lyngby"]
  },
  "lines": [
    {
      "lineNumber": 1,
      "description": "consultancy",
      "quantity": 10,
      "itemPrice": 905,
      "total": 9050
    }
  ]
}
```

That raises the question of what happens to the calculated “total” values on POST or PUT. The answer is that we need a different representation without these fields.

PUT/POST

Besides removing the total values from the resource representation we also remove the id, ie. we will not let the client set it (letting the client choose the id could also have been a valid option).

{
  "invoiceDate": "2016-12-05T00:00Z",
  "dueDate": "2016-12-12T00:00Z",
  "customer": {
    "name": "Microsoft Development Center",
    "addressLines": ["Kanalvej 8", "2800 Kongens Lyngby"]
  },
  "lines": [
    {
      "lineNumber": 1,
      "description": "consultancy",
      "quantity": 10,
      "itemPrice": 905
    }
  ]
}

The Invoice controller

For now we are not going all in on DDD with a separate application service layer, but rather make do with that code in the controller.

Get

So here is the GET implementation

[Route("invoice")]
public sealed class InvoicesController : Controller
{
    ...
    [HttpGet("{id}", Name = "GetInvoice")]
    public IActionResult Get(string id)
    {
        var invoice = invoiceRepository.Get(id);
        if (invoice == null)
        {
            return NotFound();
        }
        return Ok(mapper.ToModel(invoice));
    }
    ...

We use a literal in the Route attribute to avoid issues with refactoring. If we cannot find the invoice we return 404 Not Found; else we return the invoice, mapped to this GET model. Notice that HttpGet specifies a Name - this is used in the Post action later.

public sealed class GetInvoice
{
    public DateTimeOffset InvoiceDate { get; set; }
    public DateTimeOffset DueDate { get; set; }
    public GetInvoiceCustomer Customer { get; set; }
    public decimal SubTotal { get; set; }
    public IEnumerable<GetInvoiceLine> Lines { get; set; }
    public string Id { get; set; }
}

I am sure GetInvoice will cause some discussion. You could do without separate representations per Http verb, but that would impact validation among other things. That is the reason behind the choice here, as you will see in a moment with POST.
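
For reference, the mapper used in Get could look roughly like this (a sketch only - the domain type names Invoice, Customer and line are assumptions, and the actual mapper in the repository may differ):

public sealed class GetInvoiceMapper
{
    // Maps the domain invoice (assumed shape) to the GET representation
    public GetInvoice ToModel(Invoice invoice)
    {
        return new GetInvoice
        {
            Id = invoice.Id,
            InvoiceDate = invoice.InvoiceDate,
            DueDate = invoice.DueDate,
            SubTotal = invoice.SubTotal,
            Customer = new GetInvoiceCustomer
            {
                Name = invoice.Customer.Name,
                AddressLines = invoice.Customer.AddressLines
            },
            Lines = invoice.Lines.Select(line => new GetInvoiceLine
            {
                LineNumber = line.LineNumber,
                Description = line.Description,
                Quantity = line.Quantity,
                ItemPrice = line.ItemPrice,
                Total = line.Total
            }).ToList()
        };
    }
}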

POST

In POST we have more to take into account.

[HttpPost]
public IActionResult Post([FromBody] UpdateInvoice updateInvoice)
{
    if (updateInvoice == null)
    {
        return BadRequest();
    }
    if (!ModelState.IsValid)
    {
        return BadRequest(ModelState);
    }
    var createdInvoice = invoiceRepository.Create(updateInvoiceMapper.ToDomain(updateInvoice));
    return CreatedAtRoute("GetInvoice",
        new { id = createdInvoice.Id },
        getInvoiceMapper.ToModel(createdInvoice));
}

First of all the request could be malformed in such a way that updateInvoice is null; in that case we return BadRequest.

Next we check whether the model is valid. This is one of the reasons for having the UpdateInvoice, UpdateInvoiceLine and UpdateInvoiceCustomer models, where we decorate the properties with data annotations as shown here

public sealed class UpdateInvoice
{
    public DateTimeOffset InvoiceDate { get; set; }
    public DateTimeOffset DueDate { get; set; }
    public UpdateInvoiceCustomer Customer { get; set; }
    public IEnumerable<UpdateInvoiceLine> Lines { get; set; }
}

public sealed class UpdateInvoiceLine
{
    public int LineNumber { get; set; }
    public decimal Quantity { get; set; }
    public decimal ItemPrice { get; set; }

    [Required(ErrorMessage = "Invoice line description must be specified")]
    [MaxLength(500, ErrorMessage = "Invoice line description is too long")]
    public string Description { get; set; }
}

If everything goes well we return the newly created invoice with a 201 and a Location header for the new resource, as shown here

HTTP/1.1 201 Created
Date: Thu, 08 Dec 2016 12:40:35 GMT
Transfer-Encoding: chunked
Content-Type: application/vnd.restexample.finance+json; charset=utf-8
Location: http://localhost:5000/invoice/1002

{
  "invoiceDate": "2016-12-05T00:00:00+00:00",
  "dueDate": "2016-12-12T00:00:00+00:00",
  "customer": {
    "name": "Microsoft Development Center",
    "addressLines": ["Kanalvej 8", "2800 Kongens Lyngby"]
  },
  "subTotal": 0.0,
  "lines": [
    { "lineNumber": 1, "description": "test", "quantity": 10.0, "itemPrice": 1.0, "total": 10.0 }
  ],
  "id": "1002"
}

Note that when calling POST we have to add the headers

Accept: application/vnd.restexample.finance+json
Content-Type: application/vnd.restexample.finance+json

The POST code here is not pretty when it comes to DRY, but that will be a topic for a later post.

PUT

With PUT we wish to update the invoice, and we must supply the full payload. We will look at the PATCH verb in a later post.

[HttpPut("{id}")]
public IActionResult Put(string id, [FromBody] UpdateInvoice updateInvoice)
{
    if (updateInvoice == null)
    {
        return BadRequest();
    }
    if (!ModelState.IsValid)
    {
        return BadRequest(ModelState);
    }
    if (!invoiceRepository.Exists(id))
    {
        return NotFound();
    }
    invoiceRepository.Update(updateInvoiceMapper.ToDomain(updateInvoice, id));
    return NoContent();
}

The first two validations are similar to POST. Then we check whether the invoice exists and return NotFound if not. If we can update we return NoContent (the client has the invoice already).
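
To illustrate, a full PUT exchange could look like this (illustrative only - the payload is elided):

PUT /invoice/1000 HTTP/1.1
Accept: application/vnd.restexample.finance+json
Content-Type: application/vnd.restexample.finance+json

{ ...full invoice payload as in the PUT/POST representation above... }

HTTP/1.1 204 No Content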

Wrap-up

That is it for the most basic api for now. But we can do much more as we will see in later posts (hopefully).

Automate VM deployment with Azure + Docker

We recently started to use Elasticsearch for search within our application. Since we wanted an infrastructure where we could provision multiple services to a cluster, we decided to build a VM cluster with a docker host on each VM.

To be able to quickly reproduce the cluster we decided to automate this task - that is, creating the Azure parts needed and deploying the docker containers. You can do this using PowerShell, the Azure CLI, Kubernetes, Mesos etc. We grabbed the chance to use the .NET Azure Management API - so we had C# at our full disposal for a range of custom tasks.

The end result was a .Net command line application that takes a json file with a list of actions and spins up a number of virtual machines with docker installed and the containers needed. For now we spin up Nginx as a reverse proxy with Elasticsearch behind it.

So this is what we automated:

At Azure level: creating/updating

  • a virtual network for the VMs
  • storage account for VM harddisks
  • cloud service for the VMs
  • uploading ssh keys for use with Linux VMs
  • virtual machines, including
    – using resource extensions to install the docker host
    – adding custom disks to the VMs
    – adding load balancers with a probe

and a whole range of custom tasks for setting up the docker hosts and their storage. You could use Puppet/Chef for this - but in order not to bring in another tool we did it with simple bash scripts.

If this is something you want to do then here is an overview of what we did. To get started all you need to do is add a range of nuget management packages to a project. They all start with the name “Microsoft.WindowsAzure.Management” followed by a postfix for each area you can manage, ie. .Compute, .Network, .WebSites and so on.
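
All the tasks below take a CertificateCloudCredentials instance. Creating one could look like this (a minimal sketch - the certificate file, password and subscription id are placeholders, and the management certificate must already be uploaded to the subscription):

// Load the management certificate and wrap it in credentials for the clients
var managementCertificate = new X509Certificate2("management-cert.pfx", "pfx-password");
var credentials = new CertificateCloudCredentials("<subscription-id>", managementCertificate);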

Configuring the network

Since we want the cluster to be on its own network we need to set up a virtual network. We can use the NetworkManagementClient for this task. However, since this API is limited to getting/setting xml, we must do a little messy XDocument work to add a network. But first let us look at an example of the xml:

<?xml version="1.0" encoding="utf-8"?>
<NetworkConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                      xmlns="http://schemas.microsoft.com/ServiceHosting/2011/07/NetworkConfiguration">
  <VirtualNetworkConfiguration>
    <Dns />
    <VirtualNetworkSites>
      <VirtualNetworkSite name="test" Location="North Europe">
        <AddressSpace>
          <AddressPrefix>10.0.0.0/8</AddressPrefix>
        </AddressSpace>
        <Subnets>
          <Subnet name="Subnet">
            <AddressPrefix>10.0.0.0/11</AddressPrefix>
          </Subnet>
        </Subnets>
      </VirtualNetworkSite>
    </VirtualNetworkSites>
  </VirtualNetworkConfiguration>
</NetworkConfiguration>

Here we have a network defined by the VirtualNetworkSite node, named “test” and hosted in the “North Europe” datacenter. We allocate an overall address space 10.0.0.0/8 and a subnet 10.0.0.0/11. That is basically all we need (if you want the full overview of the schema then look here). So let us look at an example of how this can be done:

public static async Task RunAsync(CertificateCloudCredentials credentials, Action<string, LogType> log, UpdateNetworkSettings settings)
{
    // .. excluded a bunch of code guards
    using (NetworkManagementClient client = CloudContext.Clients.CreateVirtualNetworkManagementClient(credentials))
    {
        string configurationXml = BaseNetworkConfigurationXml;
        try
        {
            NetworkGetConfigurationResponse networkConfiguration = await client.Networks.GetConfigurationAsync();
            configurationXml = networkConfiguration.Configuration;
        }
        catch (CloudException e)
        {
            if (e.Response.StatusCode != System.Net.HttpStatusCode.NotFound)
            {
                throw;
            }
        }
        XDocument document = XDocument.Parse(configurationXml);
        XNamespace ns = @"http://schemas.microsoft.com/ServiceHosting/2011/07/NetworkConfiguration";
        if (document.Root == null)
        {
            throw new InvalidOperationException("No network configuration element in the xml");
        }
        XElement configuration = document.Root.Element(ns + VirtualNetworkConfigurationElementName);
        if (configuration == null)
        {
            configuration = new XElement(ns + VirtualNetworkConfigurationElementName);
            document.Root.Add(configuration);
        }
        XElement sites = configuration.Element(ns + VirtualNetworkSitesElementName);
        if (sites == null)
        {
            sites = new XElement(ns + VirtualNetworkSitesElementName);
            configuration.Add(sites);
        }
        XElement site =
            sites.Elements(ns + VirtualNetworkSiteElementName)
                .FirstOrDefault(s => s.Attribute("name") != null && s.Attribute("name").Value == settings.Name);
        if (site == null)
        {
            site = new XElement(ns + VirtualNetworkSiteElementName);
            sites.Add(site);
        }
        else if (!settings.UpdateExisting)
        {
            return;
        }
        site.SetAttributeValue("name", settings.Name);
        site.SetAttributeValue("Location", settings.Location);
        List<XElement> subnets = settings.Subnets.Select(subnetDefinition =>
            new XElement(ns + SubnetElementName, new XAttribute("name", subnetDefinition.Name),
                new XElement(ns + AddressPrefixElementName, new XText(subnetDefinition.Addresses)))).ToList();
        site.ReplaceNodes(new XElement(ns + AddressSpaceElementName,
                new XElement(ns + AddressPrefixElementName, new XText(settings.AddressSpace))),
            new XElement(ns + SubnetsElementName, subnets));
        await client.Networks.SetConfigurationAsync(new NetworkSetConfigurationParameters(document.ToString()));
    }
}

The purpose of this method is to either create or update a virtual network based on the settings parameter passed in. First we get a NetworkManagementClient and attempt to fetch the existing configuration xml. If no networks have been defined yet we get a Not Found http status back - in that case we just use an xml template defined in BaseNetworkConfigurationXml. Then we do some xml exercises to either update or add nodes/attributes. Finally we call SetConfigurationAsync to store the new configuration (hopefully you will not need to change your network configuration in parallel - that would break pretty easily).
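
Hypothetical usage of the method, with the settings properties inferred from the code above (the class names UpdateNetworkTask and SubnetDefinition are assumptions):

await UpdateNetworkTask.RunAsync(
    credentials,
    (message, logType) => Console.WriteLine(message),
    new UpdateNetworkSettings
    {
        Name = "test",
        Location = "North Europe",
        AddressSpace = "10.0.0.0/8",
        UpdateExisting = true,
        Subnets = new[] { new SubnetDefinition { Name = "Subnet", Addresses = "10.0.0.0/11" } }
    });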

Creating a storage account

We need a storage account to place the VM harddisks in - and in our case we also need to add additional disks to the VMs.

We use the StorageManagementClient to either create or update the account. Here's an example of how that could be done:

private const string BlobConnectionString = "DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}";

public static async Task RunAsync(
    CertificateCloudCredentials credentials,
    Action<string, LogType> log,
    UpdateStorageAccountSettings settings)
{
    /* .. excluded a range of code guards */
    using (StorageManagementClient client = CloudContext.Clients.CreateStorageManagementClient(credentials))
    {
        string name = settings.Name.ToLowerInvariant();
        StorageAccountGetResponse existingStorageAccount = null;
        try
        {
            existingStorageAccount = await client.StorageAccounts.GetAsync(name);
        }
        catch (CloudException e)
        {
            if (e.Response.StatusCode != HttpStatusCode.NotFound)
            {
                throw;
            }
        }
        if (existingStorageAccount == null)
        {
            await Create(settings, name, client);
        }
        else
        {
            await Update(settings, name, client);
        }
        if (settings.Containers != null && settings.Containers.Count > 0)
        {
            StorageAccountGetKeysResponse storageKeys = await client.StorageAccounts.GetKeysAsync(name);
            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
                string.Format(CultureInfo.InvariantCulture, BlobConnectionString, name, storageKeys.PrimaryKey));
            CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
            foreach (string container in settings.Containers)
            {
                log(string.Format(
                    CultureInfo.InvariantCulture,
                    @"Checking/Creating container {0} in storage account {1}",
                    container, name), LogType.Information);
                CloudBlobContainer containerReference = blobClient.GetContainerReference(container);
                await containerReference.CreateIfNotExistsAsync();
            }
        }
    }
}

private static async Task Create(UpdateStorageAccountSettings settings, string name, StorageManagementClient client)
{
    StorageAccountCreateParameters createParameters = new StorageAccountCreateParameters
    {
        AccountType = settings.Type,
        Name = name,
        Label = name.ToBase64(),
        Description = settings.Description,
    };
    if (!string.IsNullOrWhiteSpace(settings.AffinityGroup))
    {
        createParameters.AffinityGroup = settings.AffinityGroup;
    }
    else
    {
        createParameters.Location = settings.Location;
    }
    await client.StorageAccounts.CreateAsync(createParameters);
}

// Update method omitted ... pretty similar to the Create part

We start by looking up the storage account. If it does not exist we call the Create method where the real work is done. The remaining part of RunAsync just creates the list of containers specified in the settings we pass in. The only really interesting thing in Create is that you have to specify an account type, ie. “Standard_LRS”. You can see the list of options [here](https://msdn.microsoft.com/en-us/library/azure/ee460802.aspx).

Create a cloud service to host the VMs

The virtual machines will run inside a cloud service. We use ComputeManagementClient to create it. Basically all we have to supply is a name for the service, and a location (or affinity group).

public static async Task RunAsync(
    CertificateCloudCredentials credentials,
    Action<string, LogType> log,
    UpdateCloudServiceSettings settings)
{
    /* .. excluded a range of code guards */
    using (ComputeManagementClient client = CloudContext.Clients.CreateComputeManagementClient(credentials))
    {
        HostedServiceGetResponse existingService = null;
        try
        {
            existingService = await client.HostedServices.GetAsync(settings.Name);
        }
        catch (CloudException e)
        {
            if (e.Response.StatusCode != HttpStatusCode.NotFound)
            {
                throw;
            }
        }
        if (existingService == null)
        {
            HostedServiceCreateParameters parameters = new HostedServiceCreateParameters
            {
                Description = settings.Description,
                Label = settings.Name.ToBase64(),
                ServiceName = settings.Name
            };
            if (!string.IsNullOrWhiteSpace(settings.AffinityGroup))
            {
                parameters.AffinityGroup = settings.AffinityGroup;
            }
            else
            {
                parameters.Location = settings.Location;
            }
            await client.HostedServices.CreateAsync(parameters);
        }
        else
        {
            await client.HostedServices.UpdateAsync(
                existingService.ServiceName,
                new HostedServiceUpdateParameters { Description = settings.Description });
        }
    }
}

Uploading SSH certificate for the Linux VMs

We want our SSH login to use a certificate and to disable password based authentication. So we upload an SSH certificate (a pem file) like this:

public static async Task RunAsync(
    CertificateCloudCredentials credentials,
    Action<string, LogType> log,
    UploadCertificateSettings settings)
{
    /* .. excluded a range of code guards */
    using (ComputeManagementClient client = CloudContext.Clients.CreateComputeManagementClient(credentials))
    {
        ServiceCertificateGetResponse certificate = null;
        X509Certificate2 certificateToUpload = new X509Certificate2(settings.File);
        try
        {
            certificate = await client.ServiceCertificates.GetAsync(
                new ServiceCertificateGetParameters(
                    settings.ServiceName,
                    "sha1",
                    certificateToUpload.Thumbprint));
        }
        catch (CloudException e)
        {
            if (e.Response.StatusCode != HttpStatusCode.NotFound)
            {
                throw;
            }
        }
        if (certificate == null)
        {
            log("Uploading certificate", LogType.Information);
            byte[] certificateContent = certificateToUpload.Export(X509ContentType.Pfx);
            await client.ServiceCertificates.CreateAsync(
                settings.ServiceName,
                new ServiceCertificateCreateParameters
                {
                    CertificateFormat = CertificateFormat.Pfx,
                    Data = certificateContent
                });
        }
        else
        {
            log("Certificate already uploaded", LogType.Information);
        }
    }
}

We first try to look up (based on the thumbprint) whether the certificate is already there. If not we upload it using CreateAsync - it must be exported to Pfx first. So now we have all the parts ready for creating the virtual machines.

Creating the VMs

When creating the virtual machines the interesting part is that we want to use the Docker resource extension. This is something you get for free when you use the Azure CLI (but not in PowerShell at the time I wrote this code). There is more coding to this, so let us break the process up.

public static async Task RunAsync(
    CertificateCloudCredentials credentials,
    Action<string, LogType> log,
    UpdateVirtualMachinesInCloudServiceSettings settings)
{
    /* .. excluded a range of code guards */
    using (ComputeManagementClient client = CloudContext.Clients.CreateComputeManagementClient(credentials))
    {
        DeploymentGetResponse deployment = null;
        try
        {
            // Check if the deployment exists
            deployment = await client.Deployments.GetByNameAsync(
                settings.ServiceName, settings.ServiceName);
        }
        catch (CloudException e)
        {
            if (e.Response.StatusCode != HttpStatusCode.NotFound)
            {
                throw;
            }
        }
        ...

VMs are placed in a deployment in the cloud service (a deployment has a staging and a production slot). So the first step is to look up the deployment the VM is to be added to. Depending on whether we are creating the first VM, adding a subsequent one, or updating an existing one, we have to use different parts of the API.
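
A rough sketch of that branching (AddToDeployment and Update are hypothetical helper names - only CreateFirst is shown in this post):

if (deployment == null)
{
    // No deployment yet: create it together with the first VM
    await CreateFirst(client, settings, virtualMachineSettings);
}
else if (deployment.Roles.All(role => role.RoleName != virtualMachineSettings.Name))
{
    // The deployment exists but this VM does not: add it as a new role
    await AddToDeployment(client, settings, virtualMachineSettings);
}
else
{
    // The VM already exists: update its role
    await Update(client, settings, virtualMachineSettings);
}

Let us take a look at how to create the first VM. A VM is a Role in the API, so to create one we could do this: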

private static async Task CreateFirst(ComputeManagementClient client, UpdateVirtualMachinesInCloudServiceSettings settings, VirtualMachine virtualMachineSettings)
{
    Role role = CreateRole(virtualMachineSettings);
    VirtualMachineCreateDeploymentParameters createParameters = new VirtualMachineCreateDeploymentParameters
    {
        DeploymentSlot = DeploymentSlot.Production,
        Label = settings.ServiceName,
        Name = settings.ServiceName,
        Roles = new List<Role> { role },
        LoadBalancers = CreateLoadBalancers(settings.LoadBalancers)
    };
    if (!string.IsNullOrWhiteSpace(settings.VirtualNetworkName))
    {
        createParameters.VirtualNetworkName = settings.VirtualNetworkName;
    }
    await client.VirtualMachines.CreateDeploymentAsync(settings.ServiceName, createParameters);
}

private static Role CreateRole(VirtualMachine virtualMachine)
{
    Role role = new Role
    {
        ProvisionGuestAgent = true,
        RoleName = virtualMachine.Name,
        Label = virtualMachine.Name,
        ConfigurationSets = new List<ConfigurationSet>(),
        RoleSize = virtualMachine.Size,
        RoleType = VirtualMachineRoleType.PersistentVMRole.ToString(),
        OSVirtualHardDisk = GetOsVirtualHardDisk(virtualMachine)
    };
    if (!string.IsNullOrEmpty(virtualMachine.AvailabilitySet))
    {
        role.AvailabilitySetName = virtualMachine.AvailabilitySet;
    }
    if (virtualMachine.ResourceExtensions != null)
    {
        role.ResourceExtensionReferences = CreateResourceExtensions(virtualMachine.ResourceExtensions);
    }
    if (virtualMachine.DataDisks != null)
    {
        role.DataVirtualHardDisks = CreateDataDisks(virtualMachine.DataDisks);
    }
    ConfigureMachine(virtualMachine, role.ConfigurationSets);
    ConfigureNetwork(virtualMachine.NetworkConfiguration, role.ConfigurationSets);
    return role;
}

In the CreateRole method the virtualMachine parameter holds the configuration settings for the VM. The Role class contains the basic information about the VM, like the name and machine size. The OSVirtualHardDisk refers to the OS image we want to base the VM on. Since we want the cluster to be resilient to Azure downtime we also have the ability to specify an AvailabilitySet.

When specifying the role we provide an image the VM should be created with. Here is the helper function GetOsVirtualHardDisk:

private static OSVirtualHardDisk GetOsVirtualHardDisk(VirtualMachine virtualMachine)
{
    return new OSVirtualHardDisk
    {
        MediaLink = new Uri(string.Format(CultureInfo.InvariantCulture, StorageUrl, virtualMachine.StorageAccount, virtualMachine.StoragePath)),
        SourceImageName = virtualMachine.ImageName
    };
}

Here we refer to a standard image (e.g. b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04-LTS-amd64-server-20140618.1-en-us-30GB).
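
If you need to find such an image name you can list the available OS images through the same client. A sketch (this is an assumption about the library surface - depending on the package version the operations property is called VirtualMachineOSImages or VirtualMachineImages):

// List the platform images and pick the newest Ubuntu 14.04 LTS one
var images = await client.VirtualMachineOSImages.ListAsync();
var ubuntu = images.Images
    .Where(image => image.Label != null && image.Label.Contains("Ubuntu Server 14.04 LTS"))
    .OrderByDescending(image => image.PublishedDate)
    .FirstOrDefault();
if (ubuntu != null)
{
    Console.WriteLine(ubuntu.Name);
}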

To deploy the docker host we need to specify a resource extension. This is what is covered in the CreateResourceExtensions method. It may also be useful to see the json configuration that feeds into this method:

"resourceExtensions": [
  {
    "name": "DockerExtension",
    "publisher": "MSOpenTech.Extensions",
    "referenceName": "DockerExtension",
    "version": "0.3",
    "state": "enable",
    "useConfigurator": true,
    "resourceExtensionParameterValues": [
      {
        "key": "certificateDirectory",
        "value": "{certificateDirectory}"
      },
      {
        "key": "dockerPort",
        "value": "4243"
      }
    ]
  }
],
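
And here is the CreateResourceExtensions method that converts these settings into the ResourceExtensionReference instances the management API expects:
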
private static IList<ResourceExtensionReference> CreateResourceExtensions(
    IEnumerable<ResourceExtensionReferenceSettings> referenceSettings)
{
    List<ResourceExtensionReference> references = new List<ResourceExtensionReference>();
    foreach (ResourceExtensionReferenceSettings settings in referenceSettings)
    {
        IList<ResourceExtensionParameterValue> resourceExtensionParameterValues;
        if (!settings.UseConfigurator)
        {
            resourceExtensionParameterValues =
                settings.ResourceExtensionParameterValues.Select(
                    values =>
                        new ResourceExtensionParameterValue
                        {
                            Key = values.Key,
                            Type = values.Type,
                            Value = values.Value
                        }).ToList();
        }
        else
        {
            bool hasConfigurator = ResourceExtensionConfigurators.ContainsKey(settings.Name);
            if (!hasConfigurator)
            {
                throw new InvalidOperationException(
                    string.Format(
                        CultureInfo.InvariantCulture,
                        "Unknown resource extension {0} - found no registered configurator. Cannot continue.",
                        settings.Name));
            }
            resourceExtensionParameterValues =
                ResourceExtensionConfigurators[settings.Name](settings.ResourceExtensionParameterValues);
        }
        ResourceExtensionReference reference = new ResourceExtensionReference
        {
            Name = settings.Name,
            Publisher = settings.Publisher,
            ReferenceName = settings.ReferenceName,
            State = settings.State,
            Version = settings.Version,
            ResourceExtensionParameterValues = resourceExtensionParameterValues
        };
        references.Add(reference);
    }
    return references;
}

The net result is that we need to specify a ResourceExtensionReference instance for docker where,

  • the Name is “DockerExtension”
  • Publisher is “MSOpenTech.Extensions”
  • ReferenceName is “DockerExtension”
  • and a version eg. “0.3” and two sets of key-value parameters
    – one is key = “dockerPort” with eg value = “4243”
    – and then the certificates to secure docker

As you can see from the code above we have a configurator mechanism for parameters that require special handling. So to configure the docker resource extension we use this configurator:

public static class DockerExtensionConfigurator
{
    private const string CertificateDirectoryParameterName = "certificateDirectory";
    private const string DockerPortParameterName = "dockerPort";
    private const string ParameterNotFound = "Resource extension parameter {0} must be specified";
    private const string FileNotFound = "File {0} not found";
    private const string CaCertFileName = "ca.pem";
    private const string ServerCertFileName = "server-cert.pem";
    private const string ServerKeyFileName = "server-key.pem";

    /// <summary>
    /// Configure docker extension
    /// </summary>
    public static IList<ResourceExtensionParameterValue> Configure(
        ICollection<ResourceExtensionParameterValueSettings> parameters)
    {
        List<ResourceExtensionParameterValue> convertedParameters = new List<ResourceExtensionParameterValue>();
        ResourceExtensionParameterValue certificates = GetCertificates(parameters);
        ResourceExtensionParameterValue dockerPort = GetDockerPort(parameters);
        convertedParameters.Add(certificates);
        convertedParameters.Add(dockerPort);
        return convertedParameters;
    }

    private static ResourceExtensionParameterValue GetDockerPort(IEnumerable<ResourceExtensionParameterValueSettings> parameters)
    {
        ResourceExtensionParameterValueSettings dockerPort =
            parameters.FirstOrDefault(parameter => parameter.Key == DockerPortParameterName);
        if (dockerPort == null || string.IsNullOrWhiteSpace(dockerPort.Value))
        {
            throw new InvalidOperationException(
                string.Format(CultureInfo.InvariantCulture, ParameterNotFound, DockerPortParameterName));
        }
        JObject value = new JObject(new JProperty("dockerport", dockerPort.Value));
        return new ResourceExtensionParameterValue
        {
            Type = "Public",
            Key = "ignored",
            Value = value.ToString(Formatting.None)
        };
    }

    private static ResourceExtensionParameterValue GetCertificates(IEnumerable<ResourceExtensionParameterValueSettings> parameters)
    {
        ResourceExtensionParameterValueSettings certificateDirectory =
            parameters.FirstOrDefault(parameter => parameter.Key == CertificateDirectoryParameterName);
        if (certificateDirectory == null || string.IsNullOrWhiteSpace(certificateDirectory.Value))
        {
            throw new InvalidOperationException(
                string.Format(CultureInfo.InvariantCulture, ParameterNotFound, CertificateDirectoryParameterName));
        }
        string caFileName = Path.Combine(certificateDirectory.Value, CaCertFileName);
        string serverCertFileName = Path.Combine(certificateDirectory.Value, ServerCertFileName);
        string serverKeyFileName = Path.Combine(certificateDirectory.Value, ServerKeyFileName);
        CheckFileExists(caFileName);
        CheckFileExists(serverCertFileName);
        CheckFileExists(serverKeyFileName);
        JObject value = new JObject(
            new JProperty("ca", GetFileAsBase64(caFileName)),
            new JProperty("server-cert", GetFileAsBase64(serverCertFileName)),
            new JProperty("server-key", GetFileAsBase64(serverKeyFileName)));
        ResourceExtensionParameterValue certificates = new ResourceExtensionParameterValue
        {
            Key = "ignored",
            Value = value.ToString(Formatting.None),
            Type = "Private"
        };
        return certificates;
    }

    private static string GetFileAsBase64(string fileName)
    {
        return Convert.ToBase64String(File.ReadAllBytes(fileName));
    }

    private static void CheckFileExists(string fileName)
    {
        if (!File.Exists(fileName))
        {
            throw new FileNotFoundException(
                string.Format(CultureInfo.InvariantCulture, FileNotFound, fileName),
                fileName);
        }
    }
}

Basically the certificate part is just a matter of providing a json value with base64 encodings of the certificates.
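
So the private parameter value that GetCertificates hands to the extension ends up looking like this (base64 content shortened):

{"ca":"LS0tLS1CRUdJTi...","server-cert":"LS0tLS1CRUdJTi...","server-key":"LS0tLS1CRUdJTi..."}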

Back in CreateRole, besides creating the data disks (which is just a matter of providing a list of DataVirtualHardDisk instances), the role's ConfigurationSets must also be specified. This is done in

...
ConfigureMachine(virtualMachine, role.ConfigurationSets);
ConfigureNetwork(virtualMachine.NetworkConfiguration, role.ConfigurationSets);
...

The first one configures the user name, password, hostname, timezone etc.

private static void ConfigureMachine(VirtualMachine virtualMachine, IList<ConfigurationSet> configurationSets)
{
    ConfigurationSet machineConfiguration = new ConfigurationSet
    {
        ResetPasswordOnFirstLogon = false,
        EnableAutomaticUpdates = false,
        ComputerName = virtualMachine.Name,
        AdminUserName = virtualMachine.AdminUserName,
        AdminPassword = virtualMachine.AdminPassword,
        HostName = virtualMachine.Name,
        SubnetNames = new List<string>(),
    };
    configurationSets.Add(machineConfiguration);
    if (!string.IsNullOrWhiteSpace(virtualMachine.Timezone))
    {
        machineConfiguration.TimeZone = virtualMachine.Timezone;
    }
    if (virtualMachine.Type.ToLowerInvariant() == WindowsProvisioningConfiguration)
    {
        ...
    }
    else
    {
        machineConfiguration.ConfigurationSetType = ConfigurationSetTypes.LinuxProvisioningConfiguration;
        machineConfiguration.DisableSshPasswordAuthentication = false;
        machineConfiguration.UserName = virtualMachine.AdminUserName;
        machineConfiguration.UserPassword = virtualMachine.AdminPassword;
        if (virtualMachine.Ssh != null)
        {
            if (string.IsNullOrWhiteSpace(virtualMachine.Ssh.CertificateFile)
                || !File.Exists(virtualMachine.Ssh.CertificateFile))
            {
                throw new InvalidOperationException("SSH certificate file does not exist or is not specified");
            }
            machineConfiguration.SshSettings =
                new Microsoft.WindowsAzure.Management.Compute.Models.SshSettings();
            machineConfiguration.DisableSshPasswordAuthentication = true;
            X509Certificate2 certificate = new X509Certificate2();
            certificate.Import(virtualMachine.Ssh.CertificateFile);
            machineConfiguration.SshSettings.PublicKeys.Add(
                new SshSettingPublicKey(
                    certificate.Thumbprint,
                    string.Format(CultureInfo.InvariantCulture, "/home/{0}/.ssh/authorized_keys", virtualMachine.AdminUserName)));
        }
    }
}

If ssh is specified in the configuration we refer to the previously uploaded certificate. Then for the network:

private static void ConfigureNetwork(VirtualMachineNetworkConfiguration network, IList<ConfigurationSet> configurationSets)
{
    ConfigurationSet networkConfiguration = new ConfigurationSet
    {
        ConfigurationSetType = ConfigurationSetTypes.NetworkConfiguration,
        SubnetNames = new List<string>()
    };
    configurationSets.Add(networkConfiguration);
    if (!string.IsNullOrWhiteSpace(network.Ip))
    {
        networkConfiguration.StaticVirtualNetworkIPAddress = network.Ip;
    }
    if (network.SubnetNames != null)
    {
        List<string> names = network.SubnetNames.ToList();
        networkConfiguration.SubnetNames = names;
    }
    if (network.Endpoints != null)
    {
        foreach (EndpointSetting endpoint in network.Endpoints)
        {
            InputEndpoint inputEndpoint = new InputEndpoint
            {
                Port = endpoint.Port,
                LocalPort = endpoint.LocalPort,
                EnableDirectServerReturn = endpoint.EnableDirectServerReturn,
                Protocol = endpoint.Protocol
            };
            if (endpoint.AclRules != null)
            {
                inputEndpoint.EndpointAcl = new EndpointAcl
                {
                    Rules = endpoint.AclRules.Select(rule =>
                        new AccessControlListRule
                        {
                            Action = rule.Action,
                            Description = rule.Description,
                            Order = rule.Order,
                            RemoteSubnet = rule.RemoteSubnet
                        }).ToList()
                };
            }
            if (!string.IsNullOrWhiteSpace(endpoint.Name))
            {
                inputEndpoint.Name = endpoint.Name;
            }
            if (!string.IsNullOrWhiteSpace(endpoint.LoadBalancedEndpointSetName))
            {
                inputEndpoint.LoadBalancedEndpointSetName = endpoint.LoadBalancedEndpointSetName;
            }
            if (!string.IsNullOrWhiteSpace(endpoint.LoadBalancerName))
            {
                inputEndpoint.LoadBalancerName = endpoint.LoadBalancerName;
            }
            if (endpoint.LoadBalancerProbe != null)
            {
                inputEndpoint.LoadBalancerProbe =
                    new Microsoft.WindowsAzure.Management.Compute.Models.LoadBalancerProbe
                    {
                        IntervalInSeconds = endpoint.LoadBalancerProbe.IntervalInSeconds,
                        Path = endpoint.LoadBalancerProbe.Path,
                        Port = endpoint.LoadBalancerProbe.Port,
                        Protocol = endpoint.LoadBalancerProbe.Protocol == "http"
                            ? LoadBalancerProbeTransportProtocol.Http
                            : LoadBalancerProbeTransportProtocol.Tcp,
                        TimeoutInSeconds = endpoint.LoadBalancerProbe.TimeoutInSeconds
                    };
            }
            networkConfiguration.InputEndpoints.Add(inputEndpoint);
        }
    }
}

Here we set up the subnets, IP and endpoints - with endpoint ACLs and possibly a load balancer with a probe.
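
For reference, the json driving this part could look something like this (illustrative only - the property names are inferred from the settings classes used above):

"networkConfiguration": {
  "ip": "10.0.0.10",
  "subnetNames": ["Subnet"],
  "endpoints": [
    {
      "name": "http",
      "port": 80,
      "localPort": 80,
      "protocol": "tcp",
      "loadBalancedEndpointSetName": "http-lb",
      "loadBalancerProbe": {
        "port": 80,
        "path": "/",
        "protocol": "http",
        "intervalInSeconds": 15,
        "timeoutInSeconds": 31
      }
    }
  ]
}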

The End

That is basically what we did for the Azure part. The docker container deployment is another story.

Idea behind this blog

This blog is for documenting my learning path and experiments with code. It is for doing all those things that cannot fit into the work day. It is also my way of reaching out to get your take on things - should I be so lucky to get comments on a post.

Currently I am working full-time as a consultant on cloud web solutions as a fullstack developer. Hopefully this will lead to posts about Windows Azure, Docker, .Net, Javascript, Css/Svg/Less/Html, patterns, testing, different kinds of databases and other stuff.