
Applying Pain Driven Development to Patterns
10/01/18 • 8 min
This week we talk about specific ways you can apply my strategy of Pain Driven Development to the use of design patterns. This is an excerpt from my Design Pattern Mastery presentation that goes into more detail on design patterns.
Sponsor - DevIQ
Thanks to DevIQ for sponsoring this episode! Check out their list of available courses and how-to videos.
Show Notes / Transcript
I talked about Pain Driven Development, or PDD, in episode 10 - check out that episode first if you're not familiar with the practice. I've recently been focusing a bit on design patterns. An easy trap to fall into with design patterns is trying to apply them too frequently or too soon. PDD suggests waiting until you actually feel pain working with the application's current design before you refactor to improve it by applying a design pattern. In this tip, I'll walk through a few common stages of an application's growth where applying a specific pattern can help.
To begin, let's assume we have a very simple web application. Let's say it's using MVC, and there's a controller that needs to return some data fetched from a database. It could be an API endpoint or a view-based page - the UI format isn't important in this case. The absolute simplest thing you can do in this situation is hard-code your data access into the controller. So, assuming you're using ASP.NET Core and Entity Framework Core, you could instantiate a DbContext in the controller and use it to fetch the data. This works and meets the immediate requirement, so you ship this version.
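As a concrete (if hypothetical) sketch, assuming an AppDbContext with a Customers DbSet and a Customer entity - names invented for illustration - the hard-coded version might look like this:

```csharp
using System.Linq;
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;

// Hypothetical entity and context; stand-ins for your real types.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }

    // Connection details hard-coded for now; this is part of the problem.
    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options.UseSqlServer("Server=.;Database=AppDb;Trusted_Connection=True;");
}

// The controller news up its own DbContext - the simplest thing that works.
public class CustomersController : Controller
{
    public IActionResult Index()
    {
        using (var dbContext = new AppDbContext())
        {
            return View(dbContext.Customers.ToList());
        }
    }
}
```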
A little bit later, your application has grown more complex. You have some filters that also use data, along with other services. You start to notice occasional errors from EF and realize you've introduced a bug. Because you instantiate a new DbContext in each controller but occasionally pass entities between parts of the application, EF ends up in a state where entities are tracked by one DbContext instance while you try to operate on them with another. You need to use a single EF Core DbContext per web request, which is to say it should have a "Scoped" lifetime. Fortunately, ASP.NET Core makes this very easy to achieve by configuring your DbContext inside ConfigureServices. In fact, if you don't read the docs, you probably don't even know what lifetime EF Core is using, because it's hidden within an extension method. In any case, once you configure the DbContext in ConfigureServices, you need a way to get it into your controller(s). This is where the Strategy pattern, covered in episode 19, comes in. If you're familiar with dependency injection, you've used the Strategy pattern. Add a constructor to your controller, pass in the DbContext, and assign it to a private field. Do this anywhere you were otherwise newing up the DbContext, and remind yourself that 'new is glue'. You've just fixed an issue caused by coupling too tightly to the instantiation process, using the service collection built into ASP.NET Core - an IoC container, essentially a factory on steroids. Your EF Core lifetime bug is now fixed, so you ship the code.
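A minimal sketch of that refactoring, reusing the hypothetical AppDbContext from above (the "DefaultConnection" name is just an assumption):

```csharp
using System.Linq;
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;

// In Startup.ConfigureServices: AddDbContext registers AppDbContext with a
// Scoped lifetime by default - one instance per web request. Note that
// AppDbContext now needs a constructor accepting DbContextOptions<AppDbContext>
// instead of hard-coding its connection in OnConfiguring.
public void ConfigureServices(IServiceCollection services)
{
    services.AddDbContext<AppDbContext>(options =>
        options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));
    services.AddMvc();
}

// The controller now declares its dependency and lets the container supply it.
public class CustomersController : Controller
{
    private readonly AppDbContext _dbContext;

    public CustomersController(AppDbContext dbContext)
    {
        _dbContext = dbContext; // injected, never newed up
    }

    public IActionResult Index()
    {
        return View(_dbContext.Customers.ToList());
    }
}
```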
Some more time passes, the application has grown, and now there are a bunch of controllers and other places that all have a DbContext injected into them. You've noticed some duplication in how this code works with the DbContext. You've also found that it's tough to unit test classes that have a real DbContext injected, except by configuring EF Core to use its in-memory provider. This works, but you'd prefer your unit tests to have no dependencies at all, so you can test behavior rather than low-level data access libraries. You decide you can solve both of these problems by introducing the Repository pattern, which is just a fancy name for an abstraction that encapsulates the low-level details of your data access. You create a few such interfaces, implement them with the DbContext, and make sure your controllers and other classes that were directly using the DbContext now have an interface injected instead. Along the way you fix a couple of bugs that had crept in where duplicated code had evolved differently but should have remained consistent. When you're done, the only types that know about the DbContext directly are your concrete repository implementations.
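A sketch of what that abstraction might look like for the hypothetical Customer entity (interface and method names invented for illustration):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

// The abstraction the rest of the app depends on - no EF types leak out.
public interface ICustomerRepository
{
    Task<List<Customer>> ListAsync();
    Task<Customer> GetByIdAsync(int id);
}

// The only type that knows about the DbContext directly.
public class EfCustomerRepository : ICustomerRepository
{
    private readonly AppDbContext _dbContext;

    public EfCustomerRepository(AppDbContext dbContext)
    {
        _dbContext = dbContext;
    }

    public Task<List<Customer>> ListAsync() =>
        _dbContext.Customers.ToListAsync();

    public Task<Customer> GetByIdAsync(int id) =>
        _dbContext.Customers.FirstOrDefaultAsync(c => c.Id == id);
}

// Registered in ConfigureServices alongside the DbContext:
// services.AddScoped<ICustomerRepository, EfCustomerRepository>();
```

In unit tests, a controller that depends on ICustomerRepository can be handed a simple fake or mock, with no database provider involved at all.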
Your application is growing more popular now, and some of the pages are really hammering the database. Their data doesn't change very often, so you decide to add some caching. Initially you put the caching logic directly into your EF Core repository implementations, but you quickly find that there is a lot of ...
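To make that pain concrete, here's what the caching-inside-the-repository version might look like, using ASP.NET Core's IMemoryCache with the hypothetical repository from above (the cache key and expiration are arbitrary choices):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Caching.Memory;

// Caching bolted directly onto the EF Core repository. The caching concern
// and the data access concern are now tangled together, and this boilerplate
// gets copied into every repository that needs it.
public class EfCustomerRepository : ICustomerRepository
{
    private readonly AppDbContext _dbContext;
    private readonly IMemoryCache _cache;

    public EfCustomerRepository(AppDbContext dbContext, IMemoryCache cache)
    {
        _dbContext = dbContext;
        _cache = cache;
    }

    public Task<List<Customer>> ListAsync() =>
        _cache.GetOrCreateAsync("customers-list", entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
            return _dbContext.Customers.ToListAsync();
        });

    public Task<Customer> GetByIdAsync(int id) =>
        _dbContext.Customers.FirstOrDefaultAsync(c => c.Id == id);
}
```

One common way out of this duplication is to move the caching into a separate class that wraps the EF repository behind the same interface (a Proxy/Decorator), so each class keeps a single responsibility.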
Previous Episode

How Do You Even Know This Crap?
This week we have a special guest offering a dev tip - please welcome Scott Hanselman, who blogs at Hanselman.com and has a great long-running podcast, Hanselminutes. Scott's going to share some tips on how you can leverage your experience to know when a problem you're facing should already have a solution somewhere. Here's Scott.
Sponsor - DevIQ
Thanks to DevIQ for sponsoring this episode! Check out their list of available courses and how-to videos.
Show Notes / Transcript
You can view Scott's article on this topic called How Do You Even Know This Crap on his site.
Show Resources and Links
Next Episode

Shared Kernel as a Package
Code shared between applications within an organization is typically referred to as a shared kernel in domain-driven design. This week's tip discusses this approach and how best to do the sharing.
Sponsor - DevIQ
Thanks to DevIQ for sponsoring this episode! Check out their list of available courses and how-to videos.
Show Notes / Transcript
If you've written more than one application, or worked for a company that has more than one, you've probably shared code between applications. There are a variety of approaches to this, one of the most awful being the One Solution To Rule Them All approach, in which every bit of code ever written is added to a single code repository and a single solution file. One or more projects in this solution become the shared projects used by many different applications. The benefit of this approach is that developers can easily view and even debug all of the code anything might use, and changes that might break dependent projects are often discovered quickly. However, if a single project requires an update to the shared code, there's no easy way for one project to depend on a different version of the shared library than another. Even if you use more than one solution, if you're sharing code between solutions at the file system level, you're in the same boat.
In Domain-Driven Design, the Shared Kernel is code that more than one bounded context depends on. The contract between the shared kernel code and its dependents is that the shared kernel code doesn't change unless all downstream dependencies agree with the change. Often a single team maintains the shared kernel and its dependent projects, in which case this is pretty easy, but in larger organizations there may be an approval process involving several teams. When updates do occur, they should be decoupled from the dependent projects so that each dependent can pull in the update when it's ready. This enables updating the shared kernel code without having to test and update every downstream dependency immediately.
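For a sense of scale, here's the kind of small, stable code that often lives in a shared kernel - these particular types are invented for illustration:

```csharp
using System;

namespace Acme.SharedKernel
{
    // A common base class for entities with integer identity.
    public abstract class BaseEntity
    {
        public int Id { get; set; }
    }

    // A common base type for domain events raised across bounded contexts.
    public abstract class BaseDomainEvent
    {
        public DateTime DateOccurred { get; protected set; } = DateTime.UtcNow;
    }
}
```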
In .NET, one way to let dependent projects pull in the latest shared kernel updates whenever they're ready is to use a NuGet package. Any time the shared kernel changes, a new package should be published with an incremented version number. For example, you might initially have Acme.SharedKernel version 1.0.0, which two projects reference. Project A needs additional functionality, and it's agreed to place it in the shared kernel. A new package is published with version 1.0.1. Project A updates its package reference to require 1.0.1 and can be deployed. Project B continues to depend on version 1.0.0 and can continue with development and/or remain deployed using this version. Project B can choose when, and how often, to update the version of the shared kernel package it uses.
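In the projects themselves, this is just an ordinary versioned package reference in each .csproj file - a sketch using the version numbers from the example above:

```xml
<!-- Project A's .csproj: updated to pull in the new functionality -->
<PackageReference Include="Acme.SharedKernel" Version="1.0.1" />

<!-- Project B's .csproj: pinned to the older version until its team is ready -->
<PackageReference Include="Acme.SharedKernel" Version="1.0.0" />
```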
If you follow this approach, there are a few things you may find helpful. First, use continuous integration for your shared kernel library. When you make updates to it, the automated build should compile it, run tests (yes, it should have tests), update its version number, and publish it. This ensures you have a consistent process, which is especially important when we're talking about deploying versioned packages. Next, you'll want a way to share the package between your developers and build machines. One nice thing about NuGet is that any file share can serve as a NuGet feed, so at a minimum you can simply drop versioned nupkg files into a particular file share. Alternately, you can use an actual NuGet server, such as the one built into JetBrains TeamCity or VSTS/Azure DevOps, or a cloud-based solution like MyGet if you prefer. In any case, you simply need a way to distribute your shared, versioned packages.
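As a rough sketch of that publish step - the paths, version number, and share name here are all assumptions - the automated build might run something like:

```
# Pack the shared kernel with an explicit version number...
dotnet pack src/Acme.SharedKernel -c Release /p:Version=1.0.1 -o artifacts

# ...then push the .nupkg to a file-share feed (or TeamCity, Azure DevOps, MyGet, etc.)
dotnet nuget push artifacts/Acme.SharedKernel.1.0.1.nupkg --source \\fileserver\nuget-packages
```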
With these fairly small pieces in place, you should find that you're able to decouple your shared kernel package from its dependents, making updates to it as required and pulling in those updates only as each dependency needs them. You should also find that, with the shared kernel in a separate solution with its own automated build, developers are less likely to make cavalier changes to it, so it should become more stable by default and only be updated when truly needed by its downstream dependencies. And of course, you should do whatever you can to minimize the things your shared kernel itself depends on, since most of your applications are going to depend on it. Keep it lightweight, and don't let it depend on anything it doesn't truly need.
Show Resources and Links