Weekly Dev Tips

Steve Smith (@ardalis)

Weekly Dev Tips offers a variety of technical and career tips for software developers. Each tip is quick and to the point, describing a problem and one or more ways to solve that problem. I don't expect every tip to be useful to every developer, but I hope you'll find enough of them valuable to make listening worth your time. Hosted by experienced software architect, trainer, and entrepreneur Steve Smith, also known online as @ardalis. If you find these useful, you may also want to get a free software development tip delivered to your inbox every Wednesday from ardalis.com/tips.

Top 10 Weekly Dev Tips Episodes

Goodpods has curated a list of the 10 best Weekly Dev Tips episodes, ranked by the number of listens and likes each episode has garnered from our listeners. If you are listening to Weekly Dev Tips for the first time, there's no better place to start than with one of these standout episodes. If you are a fan of the show, vote for your favorite Weekly Dev Tips episode by adding your comments to the episode page.

Weekly Dev Tips - Introducing SOLID Principles

05/06/19 • 7 min

Hi and welcome back to Weekly Dev Tips. I’m your host Steve Smith, aka Ardalis.

This is episode 47, in which we'll introduce the SOLID principles. I'll spend a little time reviewing these principles in the upcoming episodes.

What are the SOLID principles of object-oriented design?

Sponsor - devBetter Group Career Coaching for Developers

Are you a software developer looking to advance in your career more quickly? Would you find a mentor and a group of like-minded professionals valuable? If so, check out devBetter.com and read the testimonials at the bottom of the page. Sign up for a risk-free membership if you're interested in growing your network and skills with us.

Show Notes / Transcript

Depending on how long you've been programming, you may have heard of the SOLID principles. These are a set of 5 principles that have been around for several decades, and about 15 years ago someone - I think it was Michael Feathers - had the idea to arrange them in such a way that they formed the macronym SOLID. Prior to that, I think the first time they were all published together was in Robert C. Martin's 2003 book, Agile Software Development: Principles, Patterns, and Practices, in which their sequence spelled SOLDI - so close! This same sequence was used in the 2006 book Agile Principles, Patterns, and Practices in C#.

So what are the SOLID principles? As I mentioned, SOLID is a macronym, meaning it is an acronym formed by other acronyms. In this case, these are SRP, OCP, LSP, ISP, and DIP. All those Ps at the end of each acronym stand for principle, of course. Listing each principle, we have:

  • Single Responsibility
  • Open/Closed
  • Liskov Substitution
  • Interface Segregation
  • Dependency Inversion

You may already be familiar with these principles. If you're a developer who's using a strongly typed language like C# or Java, you should be extremely familiar with them. If you're not, I recommend digging into them more deeply. Applying them can make a massive difference in the quality of code you write. How do I define quality? Well, that's probably a topic I could devote an episode to, but the short version is that quality code is code that is easy to understand and easy to change to suit new requirements. It's easily and quickly tested by automated tests, which reduces the need for expensive manual testing. And it's loosely coupled to infrastructure concerns like databases or files.

How do these principles help you to write quality code? They provide guidance. You need to write code that solves a problem, first and foremost. But once you have code that does that, before you call it done and check it in, you should evaluate its design and see if it makes sense to spend a few moments cleaning anything up. Back in Episode 6 - you are listening to these in sequential, not reverse, order, right? - I talked about Kent Beck's approach of Make It Work, Make It Right, Make It Fast. SOLID principles should generally be applied during the Make It Right step. Don't apply them up front, but as I discussed in Episode 10, follow Pain Driven Development. If you try to apply every principle to every part of your codebase from the start, you'll end up with extremely abstract code that could do anything but actually does nothing. Don't do that.

Instead, build the code you need to solve the problem at hand, and then evaluate whether that code has any major code smells like I discussed in episode 30. One huge code smell is code that is hard to unit test, meaning it's hard to write an automated test that can just test your code, without any external infrastructure or dependencies like databases, files, or web servers. Code that is easy to unit test is generally easy to change, and code that has tests is also easier to refactor because when you're done you'll have some degree of confidence that you haven't broken anything.

In upcoming episodes, I'll drill into each principle a bit more. I've published two courses on SOLID at Pluralsight where you can obviously learn a lot more and see real code as opposed to just hearing me through a podcast. The first one was published in 2010 and so the tools and look were a bit dated. The more recent one is slimmed down and uses the latest version of Visual Studio and .NET Core. There are links to both courses in the show notes - the original one also covers the Don't Repeat Yourself principle. Let me wrap th...

Weekly Dev Tips - On Learning TDD and LISP with Uncle Bob Martin

03/04/19 • 8 min

Hi and welcome back to Weekly Dev Tips. I’m your host Steve Smith, aka Ardalis.

This is episode 42 - the answer to life, the universe, and everything - with some guest tips on learning TDD and Lisp.

Learning TDD and Lisp

This week we have a special guest. He is the author of the books Clean Code, The Clean Coder, and Clean Architecture, all of which I think should be required reading for professional software developers. Robert C. Martin, aka "Uncle Bob", is here to share a couple of tips for software developers. You'll find him online at @unclebobmartin on twitter and at cleancoder.com. We'll jump right into his tip after this quick word from this week's sponsor.

Sponsor - devBetter Group Career Coaching for Developers

Are you a software developer looking to advance in your career more quickly? Would you find a mentor and a group of like-minded professionals valuable? If so, check out devBetter.com and read the testimonials at the bottom of the page. Sign up for a risk-free membership if you're interested in growing your network and skills with us.

Show Notes / Transcript

Ok, so without further ado, please welcome Uncle Bob to WeeklyDevTips!

...

Thanks so much, Bob! Structure and Interpretation of Computer Programs was actually the first computer science textbook I had in college, in a class that used another Lisp variant, Scheme. I've added links to the resources you mentioned to the show notes, which you'll find in your podcast client or at weeklydevtips.com/042.

Show Resources and Links

That’s it for this week. If you want to hear more from me, go to ardalis.com/tips to sign up for a free tip in your inbox every Wednesday. I'm also streaming programming topics on twitch.tv/ardalis most Fridays at noon Eastern Time. Thank you for subscribing to Weekly Dev Tips, and I'll see you next week with another great developer tip.

Weekly Dev Tips - If It Hurts, Do It More Often

02/18/19 • 8 min

Hi and welcome back to Weekly Dev Tips. I’m your host Steve Smith, aka Ardalis.

This is episode 40, in which I'll talk about the paradoxical advice, "if it hurts, do it more often."

If you’re enjoying these tips, please subscribe in your app. You can leave a rating and better yet, a comment in your app, too. I also accept subscriptions to @WeeklyDevTips on twitter and comments and requests for topics there or in the show comments, too. Thanks for all of your support!

If It Hurts, Do It More Often

I've meant to do an episode or article on this topic for a while. It's advice I've been giving to my mentoring and corporate clients for years. Let's dive in after a quick plug for this show's sponsor, devBetter.

Sponsor - devBetter Group Career Coaching for Developers

If you're not advancing as quickly in your career as you'd like, you may find value in joining a semi-formal career and technical coaching program like devBetter.com. devBetter is a small group of motivated developers meeting every week or two, and staying connected in the meantime via a private Slack community. I answer questions, review code, suggest areas in which to improve, and occasionally assign homework. Interested? Learn more at devBetter.com.

Show Notes / Transcript

I've given the advice "if it hurts, do it more often" for years, but in preparing for this episode I did some research on the phrase to see where I might have picked it up. I found a few articles, including a nice one from Martin Fowler, which I've linked from the show notes. I'll describe my own thoughts and how I usually present the concept, and then add in some of the interesting elements Fowler and others expand upon.

"If it hurts, do it more often." On its face this phrase makes no sense. Putting your hand on a hot stove hurts... so, should you do that more often? Of course not. The advice applies to business and software processes, and the implied context is that whatever "it" is, it's something that you need to do as part of your process. You'll find that a list of painful-but-necessary activities involved in shipping working software includes almost every step of building software. Compiling. Integrating. Deploying. Installing. Debugging. Testing. Pretty much all of these activities are much more difficult and painful if you try and do them rarely compared to if you do them all the time.

So, if you find yourself looking at your process and making decisions in order to minimize how often you perform some necessary part of your process because it's painful, I'm going to go the other way and say do it MORE OFTEN, not less. There's a scene in the Tom Clancy story Clear and Present Danger in which Jack Ryan is in a briefing with the President, who is having to deal with some scandal involving a friend of his. The President's team is advising him to distance himself from his friend, but Jack speaks up and advises just the opposite. Instead of distancing, go the other way. If the press asks if you're friends, tell them you're LIFELONG friends. Don't give them anywhere to go with it. Everyone is aghast at this advice, but of course the President takes it, and presumably it works out well for him. I feel just like Jack Ryan when I'm giving the counterintuitive advice of doing things more frequently despite how painful they are. It's only natural to minimize pain, and the obvious approach is avoidance. But this just increases how much pain there is when the task must eventually be done.

When you force yourself to perform part of your process more frequently, the pain decreases dramatically. There are several reasons for this. The task becomes more familiar, you gain proficiency, you haven't forgotten what you did last time, and there's been less time to add scope and complexity between steps. All of these natural effects of putting less time between repetitions of the task result in less pain. There are also steps that your team will almost certainly take to reduce pain, like automation. If you have a painful task you do twice a year, it's almost certainly not worth automating. The effort involved in automation will only be recovered a couple of times per year. But if you are performing that same task every month, every week, or every day, it very quickly starts to make sense to automate the parts of the process that you can. And once it's automated, the pain drops dramatically.

A client I work with used to have very painful deployments. They would only deploy every month or two, and doing so was always a big source of pain. Many team members would come into the office at 4am on a weekday to get ready for the deployment. The goal was to complete the deployment before customers came into the office that day. There were a lot of QA and dev team heroics. Most of the time, the deployment wouldn't be 100% successful, and often 2-3 or more additi...

Weekly Dev Tips - How do you get so much done?

02/04/19 • 13 min

Hi and welcome back to Weekly Dev Tips. I’m your host Steve Smith, aka Ardalis.

This is episode 38, in which I'll offer some personal productivity tips.

If you’re enjoying these tips, please leave a comment or rating in your podcast app, tell a friend about the podcast, or follow us on twitter and retweet our episode announcements. All these things help increase the reach of this podcast, so more people can benefit from these tips.

Getting stuff done

Occasionally I get asked questions like this one that came from a LinkedIn connection. He wrote, "how in the world do you accomplish so much? Would love to know the strategy." I'm flattered of course, but it's not the first time someone's claimed to be impressed by how much I get done, so I thought I'd share a bit about my approach.

Sponsor - devBetter Group Career Coaching for Developers

If you're not advancing as quickly in your career as you'd like, you may find value in joining a semi-formal career and technical coaching program like devBetter.com. I launched devBetter a few months ago and so far we have a small group of motivated developers meeting every week or two. I answer questions, review code, suggest areas in which to improve, and occasionally assign homework. Interested? Learn more at devBetter.com.

Show Notes / Transcript

So, "how do I get so much done?" Let me start out by saying that I try to be pretty modest. I don't have superpowers. I'm not Bill Gates or Elon Musk, either, with billions of dollars. I don't even have online following of Scott Hanselman or Robert Martin or dozens of others. But I do alright, and I'm willing to share how that is a bit here.

First, I made a realization years ago that every day I have 24 hours to utilize. No more and no less (except twice a year because of stupid daylight savings time). I used to say "I don't have time" for this or that. I'm sure I still say that sometimes, but at least in my head I try to remember that what I actually mean is "I choose not to make time" for it. It may be that you're in a position where you literally do not have control over your time, such as if you're in the military or in prison for example. But unless someone is directly controlling your freedom to choose how to spend your time, your use of time is a choice. Embrace that.

Next, decide where your priorities are. What do you want out of your life? What does success look like to you? If you're a gamer, you can approach life like a strategy game. What's your strategy? Are you trying to max out income? Optimize for the best possible family? Slide through with as few commitments as possible? For me I'd say I'm following the fairly common strategy of trying to maximize my family's well-being while achieving success in my career. Within that strategy I'm focusing on entrepreneurship and maximizing how many others I can help, as opposed to trying to climb as high up a corporate ladder as possible. Not having a strategy just means you're letting someone else choose your moves. Figure out what your strategy is, then figure out if the moves you're making - i.e. the way you're spending your time - are in line with what you think your strategy is. Remember, "How we spend our days is how we spend our lives." (Annie Dillard). Be sure you're spending your time wisely - it's the most precious resource you have.

Ok, so that's the high level strategy side of the equation. At the tactical level, there are a few things I do that probably at least make it look like I'm being super productive. First, I minimize my commute. In the past I've had commutes of as long as an hour into work in some city where I then had the privilege of paying an obscene amount of money to park my car every day. Now, I can work from home if I choose or I have about a 10 minute country road drive to my office, which is also just a few minutes from my kids' school so it's often convenient when dropping off or picking up kids (there's no bus so driving them is one of those things my wife and I "get to" make time to do most weekdays). Not having that commute adds up. If I'm spending 10 minutes instead of 60 minutes twice a day driving, that's 100 minutes per day of bonus productivity. Think about that for a few minutes. Now, if we get self-driving cars maybe that commute time can be used productively (or if you're lucky enough to have decent public transportation). But until then I optimize for minimal time wasted on commuting.

Another thing I do is minimize time spent on TV. I watch some, but pretty much only with family members as we enjoy time together, or occasionally when working out. I'm not perfect on this front, and recently I've been spending more time than I used to on video games which can suck up at least as much time as binging Netflix, but the idea is to be mindful of how much time you're spending o...

Weekly Dev Tips - Debugging Tips

01/28/19 • 5 min

Hi and welcome back to Weekly Dev Tips. I’m your host Steve Smith, aka Ardalis.

This is episode 37, in which I'll talk a bit about how I debug problems I find in my code.

If you’re enjoying these tips, please leave a comment or rating in your podcast app, tell a friend about the podcast, or follow us on twitter and retweet our episode announcements. All these things help increase the reach of this podcast, so more people can benefit from these tips.

Debugging Tip

This week's tip is by request via twitter from Bernard FitzGerald (@bernimfitz) who wrote "How about an episode devoted to effective debugging? I think that would be interesting to hear your methodology of tracking down a bug." Well, Bernard, this bug's for you. Sorry, lame beer commercial joke. On that note, here's a commercial...

Sponsor - devBetter Group Career Coaching for Developers

If you're not advancing as quickly in your career as you'd like, you may find value in joining a semi-formal career and technical coaching program like devBetter.com. I launched devBetter a few months ago and so far we have a small group of motivated developers meeting every week or two. I answer questions, review code, suggest areas in which to improve, and occasionally assign homework. Interested? Learn more at devBetter.com.

Show Notes / Transcript

Let's talk a bit about debugging. Let me start off with a couple of personal observations. First, I think debuggers are amazing. Having the ability to magically stop time anywhere you want in the middle of your application and see exactly what the state of everything is at that moment is like a super power. It far outstrips using console output and checking a log file or terminal window for logged output like "got here" and "got here 2". Those were dark days.

And second, despite how amazing they are, I almost never use the debugger. One tip I give all the time to students in my workshops is that they learn to use ctrl-F5 instead of F5 to launch their applications because it's so much faster. In my experience, 90% or more of the time you're not actually debugging when you launch your application, and in a recent experiment I ran it took about a second to launch an ASP.NET Core app without the debugger and about 10 seconds to do so with it (running on my somewhat old laptop). Those seconds add up, especially when you remember that after a few seconds you're likely to get distracted and go look at your phone or open a browser and start checking email or twitter or something. Not using the debugger helps keep you in the zone and productive.

So why not use the debugger to, like, actually debug problems? I do sometimes. But more often I'll write tests. If it's my own application, I probably already have a bunch of tests. If there's some weird behavior going on and no existing test is catching it, I'll try to write a new one that fails because of the bug I'm looking for. Going through this exercise forces me to analyze what the program is doing, what classes are collaborating and how, and in general to have a better understanding of what's going on.
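
As a concrete illustration of that workflow, here is a minimal sketch of a test written to fail because of a suspected bug. The OrderCalculator class and its discount bug are invented for this example, not taken from the episode; xUnit is assumed as the test framework.

    using Xunit;

    // Hypothetical production class containing the reported bug: the 10%
    // discount is never applied because the threshold check is inverted.
    public class OrderCalculator
    {
        public decimal Total(decimal subtotal) =>
            subtotal < 100m ? subtotal * 0.9m : subtotal; // bug: should be >
    }

    public class OrderCalculatorTests
    {
        [Fact]
        public void Total_AppliesDiscount_WhenOrderExceedsOneHundred()
        {
            var calculator = new OrderCalculator();

            var total = calculator.Total(150m);

            // Fails against the buggy code above; once the fix is in, it
            // passes and stays behind as a regression guard.
            Assert.Equal(135m, total);
        }
    }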

If I can't easily write a test to isolate the issue I'm having, then I'll use the debugger. I might even debug from a test, since that's often an easy way to jump right to a particular place in my code that I know is being called with known inputs. From there I'll look at the values of all the relevant variables and arguments and usually that will identify where something isn't set the way I'd thought or assumed it was.

Another approach I take is to use some kind of diagnostic tool within the app framework I'm using to provide me with as much data as possible about how the system is working. That might be using a tool like ELMAH for older ASP.NET apps, or an MVC route debugger middleware that shows me every route and how it's configured. I have some middleware for ASP.NET Core on GitHub that will format and render all of the services the application has registered. Things like this can often help provide additional context and information that can eventually help find the source of a problem.
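
The actual middleware lives on GitHub; purely as a rough sketch of the underlying idea (not that project's code, and using the newer minimal-API style rather than the Startup class of that era), you can register the IServiceCollection itself and render it from a diagnostic endpoint:

    using System.Text;
    using Microsoft.AspNetCore.Builder;
    using Microsoft.Extensions.DependencyInjection;

    var builder = WebApplication.CreateBuilder(args);

    // Register the service collection itself so a request can enumerate it.
    builder.Services.AddSingleton<IServiceCollection>(builder.Services);

    var app = builder.Build();

    // Diagnostic endpoint: dump every registered service and its lifetime.
    app.MapGet("/debug/services", (IServiceCollection services) =>
    {
        var sb = new StringBuilder();
        foreach (var descriptor in services)
        {
            sb.AppendLine($"{descriptor.Lifetime}: {descriptor.ServiceType.FullName} -> "
                + (descriptor.ImplementationType?.FullName ?? "(factory or instance)"));
        }
        return sb.ToString();
    });

    app.Run();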

Tests aren't the only thing that helps avoid the need for a debugger. Using custom exceptions like I described in episode 7 helps make it obvious what went wrong so you don't need to debug in order to figure out that NullReferenceException. Writing short, simple methods with low complexity, perhaps with the help of Guard Clauses that I described in episode 4 is helpful, too. I actually revisited both of these topics in the previous episode, too. When your code is kept simple and small, problems are generally easily detected. If you're writing 1000 line long methods that require multiple levels of nested regions in them to be comprehensible, I can see how you might need the debugger to sort out what the heck is going on when something doesn't...
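
To illustrate the guard clause and custom exception ideas from that last paragraph, here's a minimal sketch; the CustomerNotFoundException, ICustomerRepository, and CustomerService names are invented for illustration.

    using System;

    public record Customer(int Id, string Name);

    public interface ICustomerRepository
    {
        Customer? GetById(int id);
    }

    // A custom exception says exactly what went wrong -- no debugger needed
    // later to decipher a NullReferenceException.
    public class CustomerNotFoundException : Exception
    {
        public CustomerNotFoundException(int customerId)
            : base($"No customer found with id {customerId}.") { }
    }

    public class CustomerService
    {
        private readonly ICustomerRepository _repository;

        public CustomerService(ICustomerRepository repository)
        {
            // Guard clause: fail fast, at the point of the real mistake.
            _repository = repository ?? throw new ArgumentNullException(nameof(repository));
        }

        public Customer GetRequiredCustomer(int customerId)
        {
            if (customerId <= 0)
                throw new ArgumentOutOfRangeException(nameof(customerId));

            return _repository.GetById(customerId)
                ?? throw new CustomerNotFoundException(customerId);
        }
    }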

Weekly Dev Tips - On Code Smells

10/15/18 • 3 min

I've talked quite a bit about code smells over the course of my career. My Refactoring Fundamentals and Azure Refactoring courses on Pluralsight both discuss the topic, with the former going into great depth and covering literally dozens of code smells. The course is over 8 hours long, but it not only demonstrates tons of code smells but also shows how to refactor to improve your code in response to them.

It's important to note that code smells represent things in your code that are potentially bad. They should catch your attention, and you should think about whether, in context, the smell in question is acceptable or not. Sometimes, it's perfectly fine, or it's not worth the effort to refactor to a different design.

If you've never heard of the term code smell, I encourage you to look into it. There are some links in the show notes for this episode. One benefit of learning about code smells mirrors a benefit of learning about design patterns, which is that these named smells allow you to identify and communicate concepts quickly with other developers. For example, if you're discussing some code and mention it seems to have 'primitive obsession', that term refers to a specific code smell which is well-documented and which has certain known refactoring approaches. By using this term, you convey a lot of information in just two words that otherwise might have required a great deal more explanation.
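
For instance, here's a small illustrative sketch of the primitive obsession smell and one common remedy, a value object; the names are invented for this example.

    using System;

    // The smell: a domain concept (an email address) passed around as a raw
    // string, so every consumer must re-validate or just hope it's well formed.
    //     void Subscribe(string email) { ... }

    // One remedy: a small value type that validates once at construction, so
    // the compiler enforces correctness everywhere it's used.
    public sealed class EmailAddress
    {
        public string Value { get; }

        public EmailAddress(string value)
        {
            if (string.IsNullOrWhiteSpace(value) || !value.Contains('@'))
                throw new ArgumentException("Not a valid email address.", nameof(value));
            Value = value;
        }

        public override string ToString() => Value;
    }

    public interface INewsletterService
    {
        void Subscribe(EmailAddress email); // a raw string no longer compiles
    }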

It can be useful as well to learn about different categories of code smells. These categories include things like Bloaters, Obfuscators, and Couplers, as well as smells specific to kinds of code, like testing smells. These categories help as you're learning about code smells because they let you see a variety of smells that all have similar impacts on the code. Bloaters tend to result in code becoming larger than necessary. Couplers introduce unnecessary coupling into the application. Obfuscators make it more difficult to quickly understand how some part of your application works. And test smells make tests more difficult to write and maintain, or less reliable when run.

Some code smells you can identify with static code analysis tools, like NDepend. For instance, you can easily write a query in NDepend to return all methods over a certain number of lines of code. These kinds of tools can help you identify potential problem areas in your code so you can better direct your refactoring efforts.
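
As an illustration, an NDepend CQLinq query along these lines flags long methods; the exact syntax here is hedged from memory, so consult the NDepend documentation for specifics.

    // CQLinq sketch: warn on any method longer than 30 lines of code.
    warnif count > 0
    from m in JustMyCode.Methods
    where m.NbLinesOfCode > 30
    orderby m.NbLinesOfCode descending
    select new { m, m.NbLinesOfCode }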

I may dive into some different code smells, and how to correct them, in future tips. In the meantime, if you want to get up to speed the best resource I can recommend is my Refactoring Fundamentals course, on Pluralsight.

Show Resources and Links

Weekly Dev Tips - Applying Pain Driven Development to Patterns

10/01/18 • 8 min

Applying Pain Driven Development to Patterns

This week we talk about specific ways you can apply my strategy of Pain Driven Development to the use of design patterns. This is an excerpt from my Design Pattern Mastery presentation that goes into more detail on design patterns.

Sponsor - DevIQ

Thanks to DevIQ for sponsoring this episode! Check out their list of available courses and how-to videos.

Show Notes / Transcript

I talked about Pain Driven Development, or PDD, in episode 10 - check out that episode first if you're not familiar with the practice. I've recently been focusing a bit on some design patterns. An easy trap to fall into with design patterns is trying to apply them too frequently or too soon. PDD suggests waiting to experience pain while trying to work with the application's current design before you attempt to refactor to improve its design by applying a design pattern. In this tip, I'll walk through a few common steps where applying a specific pattern may be helpful.

To begin, let's assume we have a very simple web application. Let's say it's using MVC, and there's a controller that needs to be used to return some data fetched from a database. It could be an API endpoint or a view-based page - the UI format isn't important in this case. The absolute simplest thing you can do in this situation is hard code your data access code into your controller. So, assuming you're using ASP.NET Core and Entity Framework Core, you could instantiate a DbContext in the controller and use that to fetch the data. This works and meets the immediate requirement, so you ship this version.

A little bit later, your application has grown more complex. You have some filters that also use data, along with other services. You start to notice occasional errors from EF and realize that you've introduced a bug: by instantiating a new DbContext in each controller while occasionally passing entities between parts of the application, EF gets into a state where entities are tracked by one instance but you're trying to operate on them with another instance of DbContext. You need to use a single EF Core DbContext per web request, which is to say it should have a "Scoped" lifetime. Fortunately, ASP.NET Core makes it very easy to achieve this by configuring your DbContext inside of ConfigureServices. In fact, if you don't read the docs, you probably don't even know what lifetime EF Core is using, because it's hidden within an extension method. In any case, once you configure DbContext in ConfigureServices, you need a way to get it into your Controller(s). Doing so uses the Strategy pattern, covered in episode 19. If you're familiar with dependency injection, you've used the Strategy pattern. Add a constructor to your Controller, pass in the DbContext, and set a private local field with the value passed into the constructor. Do this anywhere you're otherwise newing up the DbContext. Remind yourself 'new is glue'. You just fixed an issue with overly tight coupling to the instantiation process by using the service collection built into ASP.NET Core, an IOC container, essentially a factory on steroids. Your EF Core lifetime bug is now fixed, so you ship the code.
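
Sketching those two steps in code (AppDbContext, Product, and CatalogController are illustrative names, and the exact registration calls vary by ASP.NET Core version):

    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.EntityFrameworkCore;

    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; } = "";
    }

    public class AppDbContext : DbContext
    {
        public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }
        public DbSet<Product> Products => Set<Product>();
    }

    // In Startup.ConfigureServices: AddDbContext registers the context with
    // a Scoped lifetime by default -- one instance per web request.
    //     services.AddDbContext<AppDbContext>(options =>
    //         options.UseSqlServer(Configuration.GetConnectionString("Default")));

    // The controller no longer news up its own DbContext ('new is glue');
    // the container injects the per-request instance via the constructor.
    public class CatalogController : Controller
    {
        private readonly AppDbContext _dbContext;

        public CatalogController(AppDbContext dbContext)
        {
            _dbContext = dbContext;
        }

        public async Task<IActionResult> Index()
        {
            var products = await _dbContext.Products.ToListAsync();
            return View(products);
        }
    }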

Some more time passes, the application has grown, and now there are a bunch of controllers and other places that all have DbContext injected into them. You've noticed some duplication in how code works with the DbContext. You've also found that it's tough to unit test your classes that have a real DbContext injected, except by configuring EF Core to use its In Memory data store. This works, but you'd prefer it if your unit tests truly had no dependencies so you could just test behavior, not low-level data access libraries. You decide that you can solve both of these problems by introducing the Repository pattern, which is just a fancy name for an abstraction used to encapsulate the low level details of your data access. You create a few such interfaces, implement them with DbContext, and make sure your Controllers and other classes that were directly using DbContext now have an interface injected instead. Along the way you fix a couple of bugs you discovered that had grown due to duplicate code that had evolved differently, but which should have remained consistent. When you're done, the only types that know about DbContext directly are your concrete Repository implementations.
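
A minimal sketch of that refactoring, reusing the illustrative AppDbContext and Product types from the previous sketch:

    using System.Collections.Generic;
    using System.Threading.Tasks;
    using Microsoft.EntityFrameworkCore;

    // The abstraction lives with the domain model...
    public interface IProductRepository
    {
        Task<Product?> GetByIdAsync(int id);
        Task<List<Product>> ListAsync();
        Task AddAsync(Product product);
    }

    // ...while this class becomes the only type that knows about DbContext.
    public class EfProductRepository : IProductRepository
    {
        private readonly AppDbContext _dbContext;

        public EfProductRepository(AppDbContext dbContext) => _dbContext = dbContext;

        public Task<Product?> GetByIdAsync(int id) =>
            _dbContext.Products.FirstOrDefaultAsync(p => p.Id == id);

        public Task<List<Product>> ListAsync() =>
            _dbContext.Products.ToListAsync();

        public async Task AddAsync(Product product)
        {
            _dbContext.Products.Add(product);
            await _dbContext.SaveChangesAsync();
        }
    }

    // Consumers depend on IProductRepository, which unit tests can replace
    // with a simple fake -- no database provider, in-memory or otherwise.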

Your application is growing more popular now, and some of the pages are really hammering the database. Their data doesn't change very often, so you decide to add some caching. Initially you start putting the caching logic directly in your data access code in your repository implementations that use EF Core, but you quickly find that there is a lot of ...

Weekly Dev Tips - Do I Need a Repository?

06/11/18 • 7 min

Do I Need a Repository?

This week we'll answer this extremely common question about the Repository pattern, and when you should think about using it.

Sponsor - DevIQ

Thanks to DevIQ for sponsoring this episode! Check out their list of available courses and how-to videos.

Show Notes / Transcript

This week we're going to return to the Repository design pattern to answer a very common question: when should you use it? This question appears very frequently in discussions about Entity Framework or EF Core, usually with someone saying "Since EF already acts like a repository, why would you create your own repository pattern on top of it?"

Before we get into the answer to this question, though, let me point out that if you're interested in the repository pattern in general I have a link to a very useful EF Core implementation in the show notes for this episode that should help get you started or perhaps give you some ideas you can use with your existing implementation. Also, just a reminder that we talked about the pattern in episode 18 on query logic encapsulation, but otherwise I haven't spent a lot of time on repository tips here, yet.

Ok, so on to this week's topic. Should you bother using the repository pattern when you're working with EF or EF Core, since these already act like a repository? If you Google for this, you're likely to discover an article discussing this topic that suggests repository isn't useful. In setting the scene, the author discusses an app he inherited that had performance issues caused by lazy loading, which he says "was needed because the application used the repository/unit of work pattern."

Before going further, let's point out two things. One, lazy loading in web applications is evil. Just don't do it except maybe for internal apps that have very few users and very small data sets. Read my article on why, linked from the show notes. Second, no, you don't need lazy loading if you're using repository. You just need to know how to pass query and loading information into the repository.
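
One way to pass that loading information in, sketched with invented names and reusing the illustrative AppDbContext from earlier (another common approach is a specification object that bundles criteria and includes together):

    using System;
    using System.Linq;
    using System.Linq.Expressions;
    using System.Threading.Tasks;
    using Microsoft.EntityFrameworkCore;

    public class Order
    {
        public int Id { get; set; }
        // Items and Customer navigation properties assumed for the usage note.
    }

    public interface IOrderRepository
    {
        // The caller declares which related data it needs, up front, so
        // everything loads eagerly in one query -- no lazy loading.
        Task<Order?> GetByIdAsync(int id,
            params Expression<Func<Order, object>>[] includes);
    }

    public class EfOrderRepository : IOrderRepository
    {
        private readonly AppDbContext _dbContext;

        public EfOrderRepository(AppDbContext dbContext) => _dbContext = dbContext;

        public Task<Order?> GetByIdAsync(int id,
            params Expression<Func<Order, object>>[] includes)
        {
            IQueryable<Order> query = _dbContext.Set<Order>();
            foreach (var include in includes)
                query = query.Include(include); // eager-load each requested path
            return query.FirstOrDefaultAsync(o => o.Id == id);
        }
    }

    // Usage, assuming Items and Customer navigations exist on Order:
    //     var order = await repo.GetByIdAsync(42, o => o.Items, o => o.Customer);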

The author later goes on to say "one of the ideas behind repository is that you might replace EF Core with another database access library but my view it's a misconception because a) it's very hard to replace a database access library, and b) are you really going to?" I agree that it's very hard to replace your data access library, unless you put it behind a good abstraction. As to whether you're going to, that's a tougher one to answer. I've personally seen organizations change data access between raw ADO.NET, Enterprise Application Block, Typed Datasets, LINQ-to-SQL, LLBLGen, NHibernate, EF, and EF Core. I've probably forgotten a couple. Oh yeah, and Dapper and other "micro-ORMs", too. If you're using an abstraction layer, you can swap out these implementation details quickly and easily. You just write a new class that is essentially an adapter of your repository to that particular tool. If you're hardcoded to any one of them, it's going to be a much bigger job (and so, yeah, you're less likely to do it because of the pain involved.)

Next, the author lists some of the bad parts of using repository. First, sorting and filtering, because a particular implementation he found from 2013 only returned an IEnumerable and didn't provide a way to allow filtering and sorting to be done in the database. Yes, poor implementations of a pattern can result in poor performance. Don't do that if performance is important. Next, he hits on lazy loading again. Ironically, at the time this article was published, EF Core didn't even support lazy loading, so this couldn't be a problem with it. Unfortunately, now it does, but as I mentioned, you shouldn't use it in web apps anyway. It has nothing to do with repository, despite the author thinking they're linked somehow. His third perf-related issue is with updates, claiming that a repository around EF Core would require saving every property, not just those that have changed. This is also untrue. You can use EF Core's change tracking capability with and through a repository just fine.
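
A short hedged sketch of why that works: the same Scoped DbContext that materialized (and is tracking) the entity also saves it, so EF writes only the changed columns. The repository and method names continue the earlier illustrative example.

    // Add to the illustrative EfProductRepository from earlier:
    //     public Task SaveChangesAsync() => _dbContext.SaveChangesAsync();

    // Usage: the entity returned by GetByIdAsync is tracked by the context,
    // so after mutating one property, SaveChanges issues an UPDATE that
    // touches only the changed column.
    var product = await repository.GetByIdAsync(42);
    if (product is not null)
    {
        product.Name = "New name";
        await repository.SaveChangesAsync(); // UPDATE ... SET Name = @p0 WHERE Id = 42
    }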

His fourth and final "bad part" of repositories when used with EF Core is that they're too generic. You can write one generic repository and then use that or subtype from it. He notes that it should minimize the code you need to write, but in his experience as things grow in complexity you end up writing more and more code in the individual repositories. Having less code to write and maintain really is a good thing. The issue with complexity resulting in more and more code in repositories is a symptom of not using another...

Weekly Dev Tips - Domain Events - After Persistence

06/04/18 • 5 min

Domain Events - After Persistence

The previous tip talked about domain events that fire before persistence. This week we'll look at another kind of domain event that should typically only fire after persistence.

Sponsor - DevIQ

Thanks to DevIQ for sponsoring this episode! Check out their list of available courses and how-to videos.

Show Notes / Transcript

If you're new to the domain events pattern, I recommend you listen to episode 22 before this one. In general, I recommend listening to this podcast in order, but I can't force that on you...

When you have a scenario in your application where a requirement is phrased "when X happens, then Y should happen," that's often an indication that using a domain event might be appropriate. If the follow-on behavior has side effects that extend beyond your application, that's often an indication that the event shouldn't occur unless persistence is successful. Let's consider a contrived real-world example.

Imagine you have a simple ecommerce application. People can browse products, add them to a virtual cart or basket, and checkout by providing payment and shipping details. Everything is working fine when you get a new requirement: when the customer checks out, they should get an email confirming their purchase. Sounds like a good candidate for a domain event, right? Ok, so your first pass at this is to simply go into the Checkout method and raise a CartCheckedOut event, which you then handle with a NotifyCustomerOnCheckoutHandler class. You're using a simple in-proc approach to domain events so when you raise an event, all handlers fire immediately before execution resumes. You roll out the change with the next deployment. Unfortunately, another change to the codebase resulted in an undetected error related to saving new orders. Meaning, they're not being saved in the database. Now the result is that customers are checking out, being redirected to a friendly error screen, but also getting an email confirming their order was placed. They're mostly assuming everything is fine on account of the pleasant email confirmation, but in fact your system has no record of the order they just placed because it didn't save. In this kind of situation, you'd really rather not send that confirmation email until you've successfully saved the new order.

While in-proc domain events are often implemented using simple static method calls to raise or register for events, post-persistence events need to be stored somewhere and only dispatched once persistence has been successful. One approach you can use for this in .NET applications is to store the events in a collection on the entity or aggregate root, and then override the behavior of the Entity Framework DbContext so that it dispatches these events once it has successfully saved the entity or aggregate. My CleanArchitecture sample on GitHub demonstrates how to put this approach into action using a technique Jimmy Bogard wrote about a few years ago. It involves overriding the SaveChanges method on the DbContext, finding all tracked entities with events in their collection, and then dispatching these events. His original approach fires the events before actually saving the entity, but I much prefer persisting first and using a different kind of domain event for immediate, no side effect events.
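
A condensed sketch of that approach follows; the BaseEntity, IDomainEventDispatcher, and OrdersDbContext shapes are simplified stand-ins, not code copied from the Clean Architecture sample.

    using System.Collections.Generic;
    using System.Linq;
    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.EntityFrameworkCore;

    public abstract class BaseDomainEvent { }

    public abstract class BaseEntity
    {
        public List<BaseDomainEvent> Events { get; } = new();
    }

    public interface IDomainEventDispatcher
    {
        Task DispatchAsync(BaseDomainEvent domainEvent);
    }

    public class OrdersDbContext : DbContext
    {
        private readonly IDomainEventDispatcher _dispatcher;

        public OrdersDbContext(DbContextOptions<OrdersDbContext> options,
            IDomainEventDispatcher dispatcher) : base(options)
        {
            _dispatcher = dispatcher;
        }

        public override async Task<int> SaveChangesAsync(
            CancellationToken cancellationToken = default)
        {
            // Gather events from tracked entities before saving.
            var entitiesWithEvents = ChangeTracker.Entries<BaseEntity>()
                .Select(entry => entry.Entity)
                .Where(entity => entity.Events.Any())
                .ToArray();

            // Persist first: events are dispatched only if the save succeeds.
            var result = await base.SaveChangesAsync(cancellationToken);

            foreach (var entity in entitiesWithEvents)
            {
                var events = entity.Events.ToArray();
                entity.Events.Clear();
                foreach (var domainEvent in events)
                    await _dispatcher.DispatchAsync(domainEvent);
            }

            return result;
        }
    }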

In the Clean Architecture sample, I have a simple ToDo entity that raises an event when it is marked complete. This event is only fired once the entity's state is saved. At that point, a handler tasked with notifying anybody subscribed to that entity's status could safely send out notifications. The pattern is very effective as a lightweight way to decouple follow-on behavior from actions that trigger them within the domain model, and it doesn't require adding additional architecture in the form of message queues or buses to achieve it.

Would your team or application benefit from an application assessment, highlighting potential problem areas and identifying a path toward better maintainability? Contact me at ardalis.com and let's see how I can help.

Show Resources and Links

Weekly Dev Tips - Dependency Inversion Principle

09/16/19 • 6 min

Hi and welcome back to Weekly Dev Tips. I’m your host Steve Smith, aka Ardalis.

This is episode 57, on the Dependency Inversion principle.

Dependency Inversion Principle

This week's tip is brought to you by devBetter.com.

Sponsor - devBetter Group Career Coaching for Developers

Need to level up your career? Looking for a mentor or the support of some motivated, tech-savvy peers? devBetter is a group coaching program I started last year. We meet for weekly group Q&A sessions and have an ongoing private Slack channel the rest of the week. I offer advice, networking opportunities, coding exercises, marketing and branding tips, and occasional assignments to help members improve. Interested? Check it out at devBetter.com.

Show Notes / Transcript

Ok, now we've reached the last and in my opinion the most important of the SOLID principles, D for Dependency Inversion. The Dependency Inversion Principle, or DIP for short, has a longer definition than most of the other principles and is often conflated with the related coding technique, dependency injection, or DI. The principle states that high-level modules should not depend on low-level modules; both should depend on abstractions (interfaces or abstract types). Further, abstractions should not depend on details; details (concrete implementations) should depend on abstractions.

Let's look quickly at each of these two parts. The first part talks about high level and low level modules. The "level" of a module has to do with how near or far it is from some kind of I/O device. That could be the user interface or it could be a local file or a database server. Low level modules deal directly with these kinds of I/O devices or destinations. High level modules do not know about or deal with specific kinds of I/O. These are things like business logic classes and behavior that model how a system works. In many systems that don't use abstractions, high level modules depend on low level modules, or the high level logic is mixed in with low level concerns in the same modules. Both of these approaches violate the Dependency Inversion Principle. Instead, these modules should communicate with one another using abstractions like C# or Java interfaces. Both kinds of modules would depend on a common interface, typically with the low level module implementing the interface and the high level module calling it.

The second part suggests that abstractions - interfaces typically - should not depend on details. So an example of this would be if you had an interface for fetching information about a customer. One approach would be to write the interface so that it returned a SqlDataReader, where the data reader had the customer info. This exposes the details of how the data is stored, since you would only use a SqlDataReader to fetch the data from a SQL database. One benefit of following the Dependency Inversion principle is modularity. You could change that interface to return a simple List type, and that List could come from any number of storage locations, from databases to files to in-memory stores or web APIs. So, that covers how abstractions should not depend on details - what about the last bit that says details should depend on abstractions? That's talking about your low-level modules that actually communicate with I/O. These should depend on your interfaces by implementing them.
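
A small sketch contrasting the two interface designs just described; the interface and type names are invented for illustration.

    using System.Collections.Generic;
    using System.Data.SqlClient;

    // Before: the abstraction depends on a detail. SqlDataReader ties every
    // caller of this interface to SQL Server storage.
    public interface ICustomerDataLeaky
    {
        SqlDataReader GetCustomerInfo(int customerId);
    }

    // After: the abstraction speaks only in domain terms. Implementations
    // can use a database, files, an in-memory store, or a web API.
    public record Customer(int Id, string Name);

    public interface ICustomerReader
    {
        Customer? GetById(int customerId);
        List<Customer> ListCustomers();
    }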

If you're building a system composed of multiple projects, it can be extremely difficult to follow the Dependency Inversion principle if you don't structure your project dependencies appropriately. This means ensuring that your abstractions - your interfaces - live in a project alongside your business model entities and that your implementation details live in another project that references this one. I have a GitHub repository and solution template called Clean Architecture that you can use as a starting point for new ASP.NET Core applications that need to follow SOLID principles and use clean architecture. You'll find a link to it in the show notes or just google ardalis clean architecture.

A key benefit of Clean Architecture that is enabled by following the Dependency Inversion Principle is that your business model has no dependencies on external infrastructure concerns. These dependencies are a huge part of why legacy codebases are often difficult or impossible to write unit tests for. By keeping these dependencies separate and in their own project that other projects do not depend upon, it makes it much easier to unit test the most important part of your application: its business domain model. I talk more about this in my ...
