This is a reference document for using .Net Core in RESTful services. Some common practices and concepts are discussed here, along with a sample project below. The project is a basic web API with minimal services implemented. It was created to be used as a template for multiple RESTful service projects. Each of these services was put into its own container (Docker) to construct an application using a microservice architecture. See the blog post below for the containerization side of this project.
http://solidfish.com/dotnet-core-with-docker-revisited/
The project itself can be found at github here:
https://github.com/johnlee/WidgetsCoreAPI
REST (Representational State Transfer) is an architectural style that is protocol agnostic, though in practice it almost always uses HTTP and leverages the standards defined by that communication protocol. The architectural style encourages separation of data from the representation layer. A way to analyze the maturity of a REST API is the Richardson Maturity Model. This model has the following levels:
- Level 0 – The transport protocol is used as the sole means of communication, with a single URI only; example – an HTTP POST does both retrieval and update of data, with queries and data stored in the transport protocol's message headers or body.
- Level 1 – Expands level 0 such that each resource is mapped to a different URI
- Level 2 – Expands level 1 such that the HTTP verbs (GET, POST, PUT, DELETE) are used correctly
- Level 3 – Expands level 2 to use Hypermedia as the Engine of Application State (HATEOAS)
- Each response from the server includes URIs for more details
- This allows discoverability
The following structures form the API's contract:
- Resource Identifier (URI)
- Use Nouns (api/authors, api/authors/{authorId})
- Use Hierarchy (api/authors/{authorId}/books)
- Filters/sorts are not resources, so they should not be in the name (api/authors?name=abc)
- RPC-style calls that don't map cleanly to resources get best-effort naming (api/authors/{authorId}/totalNumberPages)
- HTTP Method
- Payload
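The URI conventions above can be sketched as attribute routes on a controller. This is a minimal illustration only; the controller and action names are hypothetical and the bodies are stubbed out.

```csharp
using System;
using Microsoft.AspNetCore.Mvc;

// Hypothetical sketch: nouns and hierarchy live in the URI,
// while the HTTP verb expresses the action (Richardson level 2).
[Route("api/authors")]
public class AuthorsController : Controller
{
    // GET api/authors  (optionally filtered: api/authors?name=abc)
    [HttpGet]
    public IActionResult GetAuthors(string name) => Ok();

    // GET api/authors/{authorId}
    [HttpGet("{authorId}")]
    public IActionResult GetAuthor(Guid authorId) => Ok();

    // GET api/authors/{authorId}/books  (hierarchy: books under an author)
    [HttpGet("{authorId}/books")]
    public IActionResult GetAuthorBooks(Guid authorId) => Ok();
}
```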
Below are some common best practices when working with RESTful APIs. Further below is a sample project that implements some of these best practices using .Net Core 2.
REST API Responses
Whenever working with a RESTful API we should use the appropriate HTTP response codes, whether for successful executions or errors. Furthermore, we should include descriptors that the consumer or client can use to address, confirm or respond to the server. Some common response codes are:
- 100s = Information
- 200s = Success
- 201 = created
- 202 = accepted
- 204 = no content (PUT or DELETE)
- 300s = Redirection or additional information required
- 400s = Client side errors
- 400 = bad request
- 401 = unauthorized
- 403 = forbidden
- 404 = not found
- 405 = method not allowed (see Allow header)
- 415 = unsupported media type (see Content-Type header)
- 500s = Server side errors
- 501 = not implemented
- 503 = service unavailable
- 504 = gateway timeout
See the reference link below on best practices for each of these response codes and what metadata (if any) are to follow it.
In .Net Core there are many built-in helper methods on the ControllerBase class for some of these response codes. These are listed below:
return Ok();           // Http status code 200
return Created();      // Http status code 201
return NoContent();    // Http status code 204
return BadRequest();   // Http status code 400
return Unauthorized(); // Http status code 401
return Forbid();       // Http status code 403
return NotFound();     // Http status code 404
.Net Core also allows explicit response codes to be returned as shown below.
return StatusCode(405);
return StatusCode(Microsoft.AspNetCore.Http.StatusCodes.Status405MethodNotAllowed);
The full list of Http.StatusCodes definitions can be found in the StatusCodes.cs class.
A common practice is to rely heavily on the response messages instead of specific response codes. In this practice we limit the response codes to a handful of generic codes and let the consumer address the response based on the accompanying message. For example, we could use only the 200 success code for anything that completes execution (POST, PUT, PATCH, etc.), followed by a message containing details about that success. Likewise, we could have two error codes: 400 indicating a client-side error and 500 for a server-side error. Both of these would be accompanied by messages detailing the error.
Principle of Deferred Execution
When using queries, we should execute them at the latest possible time. This avoids data inconsistencies and ensures consumers receive the most up-to-date data values.
This principle can be implemented when using the Repository Pattern by using IQueryable return types. Building an IQueryable does not execute the query; execution occurs at iteration time, which could be when looping over the results or calling methods like ToList(). Refer to my post on the Repository Pattern for more information:
http://solidfish.com/repository-and-factory-pattern/
We can also apply the Deferred Execution Principle when working with sorts, filters and paging. When working with these functions, ensure the method calls are derivatives of the IQueryable type. Some examples of these types of methods are:
- OrderBy
- ThenBy
- Take
- Skip
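The deferred-execution behavior of these methods can be sketched as follows. This is an illustrative fragment only; it assumes an EF Core DbContext `_context` with an `Authors` DbSet and hypothetical `genre`, `pageNumber` and `pageSize` variables.

```csharp
using System.Linq;

// Sketch: composing an IQueryable; nothing runs against the store yet.
IQueryable<Author> query = _context.Authors;       // no query executed

query = query.Where(a => a.Genre == genre)         // still deferred
             .OrderBy(a => a.Name)                 // still deferred
             .Skip(pageSize * (pageNumber - 1))    // still deferred
             .Take(pageSize);                      // still deferred

var page = query.ToList();                         // the query executes here, at iteration
```

Because each step only builds up the expression, filters, sorts and paging are all folded into a single query that runs once, at the latest possible time.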
Improving Performance on Collection Resources
Whenever working with collections (data sets) we need to consider performance. API performance is directly impacted by the size of the returned data set. Strategies for improving performance include:
- Paging
- Filtering
- Searching
- Shaping
Paging
When using paging, we can include pagination information in the response's header. We can do this by creating a custom header, 'X-Pagination'. The paging metadata itself lives in a class we create called PagedList.
public class PagedList<T> : List<T>
{
    public int CurrentPage { get; private set; }
    public int TotalPages { get; private set; }
    public int PageSize { get; private set; }
    public int TotalCount { get; private set; }
    // ... other methods that set up these metadata fields
}
This class will be used as the return type by the repository. Since we will be returning page information in the responses, we also need access to the URLs, since the page number is part of the URL. We can do this with a custom UrlHelper class, similar to how we tracked URLs in traditional ASP.NET MVC. This is configured in the startup.cs class.
// URL Helper using context
services.AddSingleton<IActionContextAccessor, ActionContextAccessor>();
services.AddScoped<IUrlHelper>(implementationFactory =>
{
    var actionContext = implementationFactory.GetService<IActionContextAccessor>().ActionContext;
    return new UrlHelper(actionContext);
});
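With the PagedList in hand, the controller can attach the metadata to the response. This is a sketch only; `pagedAuthors` is assumed to be the PagedList returned by the repository, and the link-building via IUrlHelper is omitted.

```csharp
// Sketch: serializing paging metadata into the custom X-Pagination response header.
var paginationMetadata = new
{
    totalCount = pagedAuthors.TotalCount,
    pageSize = pagedAuthors.PageSize,
    currentPage = pagedAuthors.CurrentPage,
    totalPages = pagedAuthors.TotalPages
};

// The client reads this header to drive its paging UI,
// while the response body stays a plain collection.
Response.Headers.Add("X-Pagination",
    Newtonsoft.Json.JsonConvert.SerializeObject(paginationMetadata));
```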
Filtering
Filters limit the collection result set by applying certain predicates. The example below limits the result set to a specific value of a specific field.
http://mysite/api/authors?genre=horror
Another filtering technique is to allow complex query formats, such as supporting specific verbs. In the example below the client is requesting a filter on the 'genre' field but using the 'contains' verb to indicate how the filter is to be applied. Other verbs that could be used are 'exclude', 'startswith', 'endswith', etc.
http://mysite/api/authors?genre=contains('horror')
Searching
Searching differs from filtering in that its predicate can be applied across multiple or all fields. If the client needs to search on a specific field value, that falls under the filtering option shown above.
http://mysite/api/authors?search=The King
Shaping
Data shaping is a strategy that allows consumers to specifically call out the fields that they want. This can be given in an option field as shown below. Allowing data shaping can improve performance, as it reduces the data set size and possibly query run time if the omitted fields reduce dependencies.
http://mysite/api/authors?fields=name,age
Other options we can support for shaping are expansions: calling out specific fields that are to be expanded in the response. For example, in the option below the client is requesting that the books section of the response be expanded to include all details.
http://mysite/api/authors?expand=books
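A fields-based shaping helper can be sketched with reflection and an ExpandoObject. This is a simplified illustration, not the project's actual implementation; unknown field names are silently skipped here, whereas a real API would likely return a 400.

```csharp
using System;
using System.Collections.Generic;
using System.Dynamic;
using System.Linq;
using System.Reflection;

public static class DataShaper
{
    // Sketch: project only the requested fields of 'source' onto a dynamic object.
    // fields is the raw query string value, e.g. "name,age"; null/empty means all fields.
    public static ExpandoObject ShapeData<T>(T source, string fields)
    {
        var shaped = new ExpandoObject();
        var names = string.IsNullOrWhiteSpace(fields)
            ? typeof(T).GetProperties().Select(p => p.Name)
            : fields.Split(',').Select(f => f.Trim());

        foreach (var name in names)
        {
            var prop = typeof(T).GetProperty(name,
                BindingFlags.IgnoreCase | BindingFlags.Public | BindingFlags.Instance);
            if (prop != null)
            {
                // ExpandoObject is an IDictionary under the hood, so we can add members at runtime
                ((IDictionary<string, object>)shaped).Add(prop.Name, prop.GetValue(source));
            }
        }
        return shaped;
    }
}
```

The controller would then return `Ok(DataShaper.ShapeData(author, fields))`, and the serializer emits only the requested members.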
Sorting
Though not directly related to performance, another function APIs often need to support is sorting. Sorting can impact performance if client-side sorting is unavailable or slow; in that case we want our APIs to support server-side sorting. As with the functions discussed above, when working with server-side sorting we still want to practice the Principle of Deferred Execution. Since sort execution happens at iteration, we should perform the sort at the end of our queries. When sorting a collection we need to consider the following:
- Sorting by field(s)
- Sorting direction (ascending / descending)
These sort types can be supported if the API accepts a ‘sortby’ or ‘orderby’ field along with a descriptor on the sort direction. For example:
http://mysite/api/authors?orderby=name desc
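Translating that query string into a deferred sort can be sketched as below. This is a deliberately simple, hypothetical helper using an explicit field switch; it assumes an `Author` type with `Name` and `Age` properties. (Real projects often use a dynamic LINQ library instead, so arbitrary fields don't each need a case.)

```csharp
using System.Linq;

public static class SortHelper
{
    // Sketch: apply "orderby=name desc" style input to an IQueryable without executing it.
    public static IQueryable<Author> ApplySort(IQueryable<Author> source, string orderBy)
    {
        var parts = (orderBy ?? string.Empty).Split(' ');
        var descending = parts.Length > 1 && parts[1] == "desc";

        switch (parts[0])
        {
            case "name":
                return descending ? source.OrderByDescending(a => a.Name)
                                  : source.OrderBy(a => a.Name);
            case "age":
                return descending ? source.OrderByDescending(a => a.Age)
                                  : source.OrderBy(a => a.Age);
            default:
                return source; // unknown or missing field: leave the order unchanged
        }
    }
}
```

Because the helper returns an IQueryable, the sort is folded into the eventual query and only runs at iteration time, consistent with deferred execution.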
Idempotency
Idempotence is the property of certain operations in mathematics and computer science that can be applied multiple times without changing the result beyond the initial application. In other words – when the action is repeated the data does not change.
HTTP GET is idempotent – every time it is called, the same data is returned (assuming it is not changed by another).
HTTP PUT is idempotent – put does a full replace and therefore it will result in the same data at the end.
HTTP DELETE is idempotent – after the initial delete there is nothing left to delete, so repeating the request leaves the data in the same (deleted) state, even though subsequent calls may return a 404.
HTTP POST is not idempotent – the POST would be a new record so subsequent POST will result in more new records.
JSON Patch Format
Traditionally we would use the HTTP PUT method for updating records. But this can be risky, because PUT completely replaces the record. The client could accidentally overwrite fields that were not intended to be updated, or, when multiple requests happen close together, clients could revert each other's changes to modified fields. To avoid this risk, we can use the HTTP PATCH method, which updates only specific parts of the record. For this we have the JSON Patch format. This format is standardized and documented in RFC 6902. It allows specific update operations such as:
- add
- remove
- replace
- move
- copy
- test
Examples of these operations are shown below. Given this JSON:
{
"biscuits": [
{ "name": "Digestive" },
{ "name": "Choco Leibniz" }
]
}
Add
{ "op": "add", "path": "/biscuits/1", "value": { "name": "Ginger Nut" } }
Adds a value to an object or inserts it into an array. In the case of an array, the value is inserted before the given index. The `-` character can be used instead of an index to insert at the end of an array.
Remove
{ "op": "remove", "path": "/biscuits" }
Removes a value from an object or array.
{ "op": "remove", "path": "/biscuits/0" }
Removes the first element of the array at biscuits (or just removes the "0" key if biscuits is an object).
Replace
{ "op": "replace", "path": "/biscuits/0/name", "value": "Chocolate Digestive" }
Replaces a value. Equivalent to a “remove” followed by an “add”.
Copy
{ "op": "copy", "from": "/biscuits/0", "path": "/best_biscuit" }
Copies a value from one location to another within the JSON document. Both from and path are JSON Pointers.
Move
{ "op": "move", "from": "/biscuits", "path": "/cookies" }
Moves a value from one location to the other. Both from and path are JSON Pointers.
Test
{ "op": "test", "path": "/best_biscuit/name", "value": "Choco Leibniz" }
Tests that the specified value is set in the document. If the test fails, then the patch as a whole should not apply.
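In ASP.NET Core these operations arrive as a JsonPatchDocument, which can be applied to a model inside a PATCH action. The sketch below assumes the repository methods (`GetAuthor`, `UpdateAuthor`) and the UnprocessableEntityObjectResult class discussed later in this document.

```csharp
using System;
using Microsoft.AspNetCore.JsonPatch;
using Microsoft.AspNetCore.Mvc;

// Sketch: a PATCH endpoint applying RFC 6902 operations to an Author.
[HttpPatch("{id}")]
public IActionResult PartiallyUpdateAuthor(Guid id,
    [FromBody] JsonPatchDocument<Author> patchDoc)
{
    if (patchDoc == null)
    {
        return BadRequest();
    }

    var author = _repository.GetAuthor(id);
    if (author == null)
    {
        return NotFound();
    }

    // Applies the add/remove/replace/move/copy/test operations;
    // failures (e.g. a failed "test" op) land in ModelState.
    patchDoc.ApplyTo(author, ModelState);
    if (!ModelState.IsValid)
    {
        return new UnprocessableEntityObjectResult(ModelState);
    }

    _repository.UpdateAuthor(author);
    return NoContent();
}
```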
HATEOAS
Pronounced: Hate-oh-es
Hypermedia as the Engine of Application State
Makes the API evolvable and self-descriptive. It shows consumers how the API can be consumed, reducing the work and knowledge needed on the consumer/client side. Because it is standardized, consumers are able to automatically read and understand the resources.
The API server provides information to the client so that all endpoints are exposed and discoverable by the client. This can be done using the 'rel' (relation) attribute, which is often found in HTML for CSS links:
<link rel='stylesheet' href='bootstrap/dist/bootstrap.css'>
<a href='uri' rel='type' type='media type'>
From the above we have
- href: contains the uri
- rel: describes how the link relates to the resource
- type: describes the media type
Using this in API responses standardizes the message so that the client will be able to easily determine where to find the links. Example:
{
  "id": 1,
  "author": "lee",
  "created": "2015-01-01T12:00:00.000",
  "links": [
    { "href": "api/author/1", "rel": "self" },
    { "href": "api/author/1/comments", "rel": "comments" },
    { "href": "api/author/1/books", "rel": "books" }
  ]
}
Implementing HATEOAS can be done in the following ways:
- Static
- Use a base class that contains links
- Each model would inherit from this base class per resource
- Dynamic
- Use of anonymous types and ExpandoObjects
- Dynamically adds links to the ExpandoObject
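The static approach can be sketched with a small link DTO and a base class that each resource model inherits. The class names here are hypothetical, not the project's actual types.

```csharp
using System;
using System.Collections.Generic;

// Sketch of the static HATEOAS approach: every resource carries a Links collection.
public class LinkDto
{
    public string Href { get; set; }   // the URI
    public string Rel { get; set; }    // how the link relates to the resource
    public string Method { get; set; } // the HTTP verb to use
}

public abstract class LinkedResourceBase
{
    public List<LinkDto> Links { get; } = new List<LinkDto>();
}

// Each resource model inherits the Links property from the base class.
public class AuthorDto : LinkedResourceBase
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}
```

The controller populates `Links` (typically via IUrlHelper) before returning the DTO, so every response carries its own navigation. The dynamic approach does the same but adds the links to an ExpandoObject at runtime instead of a typed base class.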
Custom Media Types
Some vendors use custom media types to handle things like HATEOAS. For example
Accept: application/vnd.marvin.hateoas+json
When the client submits the above Accept header, the server responds with the correct HATEOAS information. This can be taken further by having custom media types define the model being sent in requests. For example, when doing a GET or POST of an author object, instead of having 'application/json' in the header as the content type, we could be more specific with something like:
Content-Type: application/vnd.marvin.author.full+json
Note that in the example above we are still using a custom vendor identifier while also denoting the 'author' object type that is being sent.
Swagger
Like HATEOAS, Swagger is a tool available to help document and describe API endpoints. Whereas HATEOAS puts URIs in its responses, Swagger documentation is provided at the server site and not in the responses. Swagger is often beneficial during development of a client, since it documents the available endpoints. And because Swagger evolves with the source code, it is often used as the source of documentation.
Versioning
There are many factors that could drive change in an API and thereby change its response structures. Some common ways of versioning APIs are:
- URI
- api/v1/authors
- Query String Parameters
- api/authors?version=1
- Custom Headers
- api-version: 1
- application/vnd.marvin.author.full.v1+json (custom media type)
- Code on Demand
- API returns javascript which the client runs. This code contains endpoint information as well as the objects being used.
There is a fourth option to versioning – don't. Some organizations do not version APIs and only create new endpoints when resources change.
Caching
Every response from an API should define whether it can be cached or not. .NET Core supports this out of the box with HTTP caching. The cache is a separate component from the API that stores responses deemed cacheable; it sits in the middle of the client-server request/response path. There are 3 types of caches:
- Client/Private Cache (browser side)
- Gateway/Shared Cache (shared at the gateway, e.g. a reverse proxy)
- Proxy/Shared Cache (shared on the network)
As of this post, .Net Core does not have an out-of-the-box cache store, so to implement full caching we need to use a third-party cache store.
Expiration Model
The expiration model uses time to determine when cached values are used. It uses an expiration and an age value to determine when the cached value needs to be renewed.
Validation Model
The validation model uses logic to determine when cached values are to be used. There are two validation techniques. The first is the use of an ETag (entity tag) in the response header, which uniquely identifies that response and can be used to determine whether the body or headers have changed. Another technique is to refresh the cache only when specific parts of the response have changed; this often still uses an ETag. The main difference between the expiration and validation models is the use of ETags, which are handled by the server. Validation therefore still requires a round trip to the server, whereas some (client-side) expiration techniques eliminate the trip to the server completely.
Cache Headers
Regardless of caching techniques, cache configuration is often handled using cache control headers.
Cache Store
Depending on the caching technique there is a cache store where the data is actually stored.
ETag
Generated on the server side as a unique identifier for the response content and included in the response header, the ETag is used to determine whether the body content is up to date. It is transmitted between client and server. If the ETag still matches (any change generates a new ETag), the server responds with a 304 (Not Modified), in which case the client reuses the previous response's content body.
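The 304 flow can be sketched in a controller action. This is illustrative only: `ComputeETag` is a hypothetical helper (e.g. a hash of the serialized body), and the repository method is assumed from the sample project.

```csharp
using System;
using Microsoft.AspNetCore.Mvc;

// Sketch: honoring the If-None-Match request header with an ETag.
[HttpGet("{id}")]
public IActionResult GetAuthor(Guid id)
{
    var author = _repository.GetAuthor(id);
    if (author == null)
    {
        return NotFound();
    }

    var eTag = ComputeETag(author); // hypothetical: e.g. hash of the serialized author

    // Client sent the ETag from its cached copy; if nothing changed, skip the body.
    if (Request.Headers["If-None-Match"] == eTag)
    {
        return StatusCode(304); // Not Modified – client reuses its cached body
    }

    Response.Headers.Add("ETag", eTag);
    return Ok(author);
}
```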
Concurrency
Concurrency control is the handling of multiple clients reading and modifying the same resource at the same time. There are two ways to handle concurrency.
- Pessimistic concurrency
- Resource is locked
- while locked no modifications by other clients
- This is not possible with REST
- Optimistic concurrency
- Token (usually an ETag) is returned with resource
- Update can happen as long as token is valid
With REST APIs we see optimistic concurrency checking only with the use of ETags. When doing a change (post/put/patch) the server will check the ETag to ensure it is up-to-date. If it is not (the backend data has changed) then a 412 (Precondition Failed) will be returned to the client. This indicates that the client should re-fetch the record first.
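The optimistic check can be sketched in an update action. Again `ComputeETag` is a hypothetical helper and the repository methods are assumed; the key point is the If-Match comparison and the 412 response.

```csharp
using System;
using Microsoft.AspNetCore.Mvc;

// Sketch: optimistic concurrency via the If-Match request header.
[HttpPut("{id}")]
public IActionResult UpdateAuthor(Guid id, [FromBody] Author author)
{
    var current = _repository.GetAuthor(id);
    if (current == null)
    {
        return NotFound();
    }

    // The client sends back the ETag it received when it fetched the record.
    // If the backend data has changed since, the tokens no longer match.
    if (Request.Headers["If-Match"] != ComputeETag(current)) // hypothetical helper
    {
        return StatusCode(412); // Precondition Failed – client must re-fetch first
    }

    _repository.UpdateAuthor(author);
    return NoContent();
}
```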
Project Setup
This is a sample project using some of the concepts discussed above. We are using the .NET CLI to setup this project. The following commands setup the initial structure.
$ dotnet new webapi
Getting ready...
The template "ASP.NET Core Web API" was created successfully.
This template contains technologies from parties other than Microsoft, see https://aka.ms/template-3pn for details.
Processing post-creation actions...
Running 'dotnet restore' on C:\demo\demo.csproj...
  Restoring packages for C:\demo\demo.csproj...
  Restore completed in 1.34 sec for C:\demo\demo.csproj.
  Generating MSBuild file C:\demo\obj\demo.csproj.nuget.g.props.
  Generating MSBuild file C:\demo\obj\demo.csproj.nuget.g.targets.
  Restore completed in 2.18 sec for C:\demo\demo.csproj.
Restore succeeded.

$ dotnet run
Hosting environment: Production
Content root path: C:\demo
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.
EntityFrameworkCore
For .NET Core 2 there is Microsoft.EntityFrameworkCore 2 available. This can be installed through NuGet:
$ dotnet add package Microsoft.EntityFrameworkCore
Writing C:\Users\me\AppData\Local\Temp\tmpFF04.tmp
info : Adding PackageReference for package 'Microsoft.EntityFrameworkCore' into project 'C:\demo\demo.csproj'.
log : Restoring packages for C:\demo\demo.csproj...
info : GET https://api.nuget.org/v3-flatcontainer/microsoft.entityframeworkcore/index.json
info : OK https://api.nuget.org/v3-flatcontainer/microsoft.entityframeworkcore/index.json 316ms
info : GET https://api.nuget.org/v3-flatcontainer/microsoft.entityframeworkcore/2.1.1/microsoft.entityframeworkcore.2.1.1.nupkg
...
info : OK https://api.nuget.org/v3-flatcontainer/microsoft.extensions.configuration.abstractions/2.1.1/microsoft.extensions.configuration.abstractions.2.1.1.nupkg 229ms
log : Installing Microsoft.Extensions.Configuration.Abstractions 2.1.1.
log : Installing System.Memory 4.5.1.
log : Installing System.Runtime.CompilerServices.Unsafe 4.5.1.
log : Installing Microsoft.Extensions.Configuration 2.1.1.
log : Installing Microsoft.Extensions.Primitives 2.1.1.
log : Installing Microsoft.Extensions.Configuration.Binder 2.1.1.
log : Installing Microsoft.Extensions.Logging.Abstractions 2.1.1.
log : Installing Microsoft.Extensions.Options 2.1.1.
log : Installing Microsoft.Extensions.Caching.Abstractions 2.1.1.
log : Installing Microsoft.EntityFrameworkCore 2.1.1.
log : Installing System.Diagnostics.DiagnosticSource 4.5.0.
log : Installing Microsoft.EntityFrameworkCore.Abstractions 2.1.1.
log : Installing Microsoft.Extensions.DependencyInjection.Abstractions 2.1.1.
log : Installing Microsoft.Extensions.Logging 2.1.1.
log : Installing Microsoft.Extensions.Caching.Memory 2.1.1.
log : Installing Microsoft.Extensions.DependencyInjection 2.1.1.
log : Installing Remotion.Linq 2.2.0.
log : Installing System.ComponentModel.Annotations 4.5.0.
log : Installing System.Collections.Immutable 1.5.0.
log : Installing Microsoft.EntityFrameworkCore.Analyzers 2.1.1.
warn : Package 'EntityFramework 6.2.0' was restored using '.NETFramework,Version=v4.6.1' instead of the project target framework '.NETCoreApp,Version=v2.0'. This package may not be fully compatible with your project.
info : Package 'Microsoft.EntityFrameworkCore' is compatible with all the specified frameworks in project 'C:\demo\demo.csproj'.
info : PackageReference for package 'Microsoft.EntityFrameworkCore' version '2.1.1' added to file 'C:\demo\demo.csproj'.
See the project source files at the GitHub link above for more information. The project contains a couple of controllers, and though we are using EntityFrameworkCore, there is no actual database connection; we're using mock data only. Another library we're using from NuGet is AutoMapper, discussed further below.
Models – AutoMapper
When working with entity models and view models we often use a model factory to map between the different object types. In .NET Core we can use the AutoMapper library to do this mapping automatically, thereby replacing the model factory. First we install the NuGet package using the dotnet CLI.
$ dotnet add package AutoMapper
Writing C:\Users\me\AppData\Local\Temp\tmp68B4.tmp
info : Adding PackageReference for package 'AutoMapper' into project 'C:\demo\demo.csproj'.
log : Restoring packages for C:\demo\demo.csproj...
info : GET https://api.nuget.org/v3-flatcontainer/automapper/index.json
info : OK https://api.nuget.org/v3-flatcontainer/automapper/index.json 237ms
info : GET https://api.nuget.org/v3-flatcontainer/automapper/7.0.1/automapper.7.0.1.nupkg
info : OK https://api.nuget.org/v3-flatcontainer/automapper/7.0.1/automapper.7.0.1.nupkg 233ms
info : GET https://api.nuget.org/v3-flatcontainer/system.valuetuple/index.json
info : OK https://api.nuget.org/v3-flatcontainer/system.valuetuple/index.json 228ms
info : GET https://api.nuget.org/v3-flatcontainer/system.valuetuple/4.5.0/system.valuetuple.4.5.0.nupkg
info : OK https://api.nuget.org/v3-flatcontainer/system.valuetuple/4.5.0/system.valuetuple.4.5.0.nupkg 232ms
log : Installing System.ValueTuple 4.5.0.
log : Installing AutoMapper 7.0.1.
info : Package 'AutoMapper' is compatible with all the specified frameworks in project 'C:\demo\demo.csproj'.
info : PackageReference for package 'AutoMapper' version '7.0.1' added to file 'C:\demo\demo.csproj'.
In order to use the AutoMapper we must define the projections in the startup.cs class under the Configure method.
AutoMapper.Mapper.Initialize(cfg =>
{
    cfg.CreateMap<Data.AuthorDb, Models.Author>()
        // Projections:
        .ForMember(dest => dest.Name, opt => opt.MapFrom(src => $"{src.FirstName} {src.LastName}"))
        .ForMember(dest => dest.Age, opt => opt.MapFrom(src => DateTime.Today.Year - src.DateOfBirth.Year));
});
In the example below we have two Author models getting mapped using the AutoMapper.
[HttpGet]
public IActionResult Get()
{
    var authorsdb = _repository.GetAuthors();
    var authors = Mapper.Map<IEnumerable<Author>>(authorsdb);
    return Ok(authors);
}

[HttpGet("{id}")]
public IActionResult GetAuthor(Guid id)
{
    var authordb = _repository.GetAuthor(id);
    if (authordb == null)
    {
        return NotFound();
    }
    var author = Mapper.Map<Author>(authordb);
    return Ok(author);
}
Global Exception Handling
The global exception handler can be set on the Startup.cs class. This sample code below shows the developer exception page, which contains details about the exception including stack trace, only in Development Mode. In production mode the generic HTTP 500 will be returned with a generic error message.
if (env.IsDevelopment())
{
    app.UseDeveloperExceptionPage();
}
else
{
    app.UseExceptionHandler(appBuilder =>
    {
        appBuilder.Run(async context =>
        {
            context.Response.StatusCode = 500;
            await context.Response.WriteAsync("An unexpected fault happened. Try again later.");
        });
    });
}
Content Negotiation and Content Formats
On incoming requests, if the 'Accept' header is given, the application should respond with that media type. By default .NET Core will use JSON. This is configured in the Startup.cs class. The example below shows the application supporting an 'Accept' request of the 'application/xml' media type by adding a new XmlDataContractSerializerOutputFormatter. Now if the requester/browser sends an 'Accept' header of XML, an XML version will be sent instead of JSON. On the input side, we can also configure handling of different input types or formats; in the example below, we are able to handle XML input types.
using Microsoft.AspNetCore.Mvc.Formatters;
...
services.AddMvc(setupAction =>
{
    setupAction.ReturnHttpNotAcceptable = true;
    setupAction.OutputFormatters.Add(new XmlDataContractSerializerOutputFormatter());
    setupAction.InputFormatters.Add(new XmlDataContractSerializerInputFormatter());
});
Custom Model Binders
Custom model binders can be created by implementing the IModelBinder interface. Example below:
public class ArrayModelBinder : IModelBinder
{
    public Task BindModelAsync(ModelBindingContext bindingContext)
    {
        // Our binder works only on enumerable types
        if (!bindingContext.ModelMetadata.IsEnumerableType)
        {
            bindingContext.Result = ModelBindingResult.Failed();
            return Task.CompletedTask;
        }

        // Get the inputted value through the value provider
        var value = bindingContext.ValueProvider
            .GetValue(bindingContext.ModelName).ToString();

        // If that value is null or whitespace, we return null
        if (string.IsNullOrWhiteSpace(value))
        {
            bindingContext.Result = ModelBindingResult.Success(null);
            return Task.CompletedTask;
        }

        // The value isn't null or whitespace,
        // and the type of the model is enumerable.
        // Get the enumerable's type, and a converter
        var elementType = bindingContext.ModelType.GetTypeInfo().GenericTypeArguments[0];
        var converter = TypeDescriptor.GetConverter(elementType);

        // Convert each item in the value list to the enumerable type
        var values = value.Split(new[] { "," }, StringSplitOptions.RemoveEmptyEntries)
            .Select(x => converter.ConvertFromString(x.Trim()))
            .ToArray();

        // Create an array of that type, and set it as the Model value
        var typedValues = Array.CreateInstance(elementType, values.Length);
        values.CopyTo(typedValues, 0);
        bindingContext.Model = typedValues;

        // return a successful result, passing in the Model
        bindingContext.Result = ModelBindingResult.Success(bindingContext.Model);
        return Task.CompletedTask;
    }
}
Model Validation
We can set model validation in a few ways. First, we use data annotations on the incoming models. Example below for author name:
[Required(ErrorMessage = "Author name is required")]
[MaxLength(100, ErrorMessage = "Author name cannot be more than 100 characters.")]
public string Name { get; set; }
Next we can create a custom ObjectResult class to handle all model validation errors. The following handler returns a 422 status code with a description of the error in a standard format.
public class UnprocessableEntityObjectResult : ObjectResult
{
    public UnprocessableEntityObjectResult(object modelState) : base(modelState)
    {
        if (modelState == null)
        {
            throw new ArgumentNullException(nameof(modelState));
        }
        StatusCode = 422;
    }
}
Finally on the controller we can use these different types of validation checks to handle the incoming model.
if (author == null)
{
    return BadRequest();
}
if (author.Age < 1 || author.Age > 999)
{
    ModelState.AddModelError(nameof(Author), "Invalid author age");
}
if (!ModelState.IsValid)
{
    return new UnprocessableEntityObjectResult(ModelState);
}
var authordb = _repository.GetAuthor(author.Id);
if (authordb == null)
{
    return NotFound();
}
Logging using NLog
Install NLog.Web.AspNetCore (check the version for .Net Core 2 support) from NuGet. NLog requires an nlog.config file at the project root; this file contains configuration for the logger. Last, we update the logger factory to use NLog as its provider. This is done in the startup.cs file.
//loggerFactory.AddProvider(new NLog.Extensions.Logging.NLogLoggerProvider());
loggerFactory.AddNLog();
Important Note: Starting in .Net Core 2 we can register the logger provider earlier in the application lifecycle. We can set this in the Program.cs file so that logging is available before startup even fires. We define this in the program file as so:
public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .UseNLog()
        .Build();
Rate Limiting
To protect APIs from being overwhelmed with concurrent requests, we can use rate limiting. There are third-party tools for this; the example below uses IpRateLimitOptions from the AspNetCoreRateLimit package. This allows rules to be defined on how to limit requests, either across the application or for specific endpoints. The example below limits clients to 1000 requests every 5 minutes and 200 requests every 10 seconds.
services.AddMemoryCache();
services.Configure<IpRateLimitOptions>((options) =>
{
    options.GeneralRules = new System.Collections.Generic.List<RateLimitRule>()
    {
        new RateLimitRule() { Endpoint = "*", Limit = 1000, Period = "5m" },
        new RateLimitRule() { Endpoint = "*", Limit = 200, Period = "10s" }
    };
});
services.AddSingleton<IRateLimitCounterStore, MemoryCacheRateLimitCounterStore>();
services.AddSingleton<IIpPolicyStore, MemoryCacheIpPolicyStore>();
Root Help Document
Some API sites will have a root document that describes the API endpoints available. This can be done using third-party helper or creating a custom controller that handles the root level. This root page would be used by clients for discovery so it will need to contain appropriate information for all the endpoints. Also, the root document can contain information about endpoint versions.
One of the best tools for supporting help documents is Swagger OpenAPI. It is commonly used with .Net Core WebAPI projects.
We install Swagger from the NuGet package manager – Swashbuckle.AspNetCore
On the startup.cs class we need to first configure swagger service as follows:
// Swagger
services.AddSwaggerGen(c =>
{
    c.SwaggerDoc("v1", new Info
    {
        Title = "Widgets API",
        Description = ".Net Core 2 REST API Sample",
        Version = "v1"
    });
    var xmlPath = System.AppDomain.CurrentDomain.BaseDirectory + @"WidgetsCoreApi.xml";
    c.IncludeXmlComments(xmlPath);
});
Next we add it to the application pipeline in the Configure method as follows:
// Swagger Setup
app.UseSwagger();
app.UseSwaggerUI(c =>
{
    c.SwaggerEndpoint("/swagger/v1/swagger.json", "Widgets API");
    c.RoutePrefix = "";
});
References
.Net Core Tutorial
https://docs.microsoft.com/en-us/aspnet/core/tutorials/first-web-api
Building RESTful API
https://app.pluralsight.com/library/courses/asp-dot-net-core-restful-api-building
REST API Guide
https://restfulapi.net/http-status-codes/
JSON Patch Format
http://jsonpatch.com
.NET Core Unit Testing EF
http://gunnarpeipman.com/testing/aspnet-core-ef-inmemory/
Asynchronous Calls in .NET Core EF
https://app.pluralsight.com/library/courses/play-by-play-converting-to-asynch-calls
NLog for .Net Core 2
https://github.com/NLog/NLog.Web/wiki/Getting-started-with-ASP.NET-Core-2
Automated Testing
https://app.pluralsight.com/library/courses/automated-testing-end-to-end
.NET Core Deployment with Docker
https://app.pluralsight.com/library/courses/deployment-pipeline-aspdotnet-core-docker
Wikipedia HATEOAS
https://en.wikipedia.org/wiki/HATEOAS
Idempotence
https://www.restapitutorial.com/lessons/idempotency.html