Monday, June 24, 2013
Here's the problem: if, at the point where your view model and your post model happen to be structurally identical, you decide that it's "overkill" or "duplication" or "unnecessary complexity" and just use a single model, then you make those two roles completely implicit, and you model a scenario that is only circumstantially true. A less experienced developer will come along and, absent the experience to know that there are two roles in one place, evolve the single object in place rather than split the roles out. They don't know any better, and you have hidden the very information that would point them toward the better way forward.
In short, a data object is incomplete without its context. Its responsibility, in terms of the Single Responsibility Principle, is naturally coupled to its context. This isn't a bad thing, as long as you recognize and respect that a data object has no standalone identity.
So how do you respect this symbiotic state of affairs? In a strongly-typed language it's very simple: by giving it its own, layer-specific type/class. This way it will become blatantly obvious when a data object has gotten too far from home and is in danger of being misused or misinterpreted.
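To make that concrete, here's a minimal sketch (the type and property names are hypothetical): two types that are structurally identical today, but each owned by its own layer and free to evolve separately.

```csharp
// Bound from the incoming form POST -- shaped by what the browser submits.
public class EditArticlePost
{
    public string Title { get; set; }
    public string Body { get; set; }
}

// Handed to the view for rendering -- shaped by what the page displays.
public class EditArticleViewModel
{
    public string Title { get; set; }
    public string Body { get; set; }
}
```

The duplication is deliberate: the moment the view needs a formatted date or the post needs a validation attribute, each type can change without dragging the other role along.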
In a loosely-typed language, you respect the coupling by cloning or mapping a data object as it moves between layers. Since there's no type system to stop you, it's extremely easy to just pass the same thing around until a clear requirement for change appears. But the very implicitness of the data object's role means that the developer will have precious few clues that such a requirement has arrived. By always cloning or mapping into a new object, the potential damage is mitigated, and the developer gets a clue that something is being protected from external modification. Hopefully this distinction of identity encourages the developer to make even small changes to structure and naming that keep the object expressive and clear of purpose in the context of its usage.
Recognizing this is a nuanced design skill, and it's one that people often resist in order to avoid adding more types to their programs. But it doesn't take long to reach a level of complexity where the benefits of this care and attention to expressiveness and signalling become clear.
Tuesday, January 17, 2012
1. A Plain Old CLR Object: an object that does not derive from a base class, or implement an interface, that is defined as an integration point for a framework.
Infrastructure frameworks often brag these days about being POCO-compatible. This is in some ways a victory, because it used to be quite the opposite. In order to use an ORM, or an MVC framework, or really anything involving a "model", you needed to inherit some framework base class before it would recognize and respect the role of these objects in your application. But we're past that now. Today you can use POCOs even with frameworks that come directly from Microsoft, like the Entity Framework.
"Convention over configuration" is a similar principle of framework and infrastructure design that has been lauded for a few years now, and is finally hitting more mainstream shops. It trades on the ability to reduce the number of design decision points, allowing you to write less code, and do it faster. In other words, why take all the time and effort to deal with a complicated configuration mechanism capable of expressing myriad options, and then a complicated engine capable of processing all those options, when any one project is only going to use a slice of it all. With convention-based frameworks, just follow some simple rules, and let the framework handle all the details.
For a long time, I was really happy about this. But today I realized that for all the benefits of simplification and boilerplate elimination, there are some pretty serious tradeoffs being made. Yes, the benefits can be absolutely huge. The gains in consistency, speed, reliability, and flexibility can't be overstated. It saves you from thinking and worrying about a lot of things that are fairly arbitrary in the context of the larger business problem.
Yet for all my appreciation, some shine is starting to come off this penny for one big reason I like to call "context-smearing". Both POCOs and conventions gain their structural homogeneity and rapid development by diluting object responsibilities, and trading away discoverability and expressive design. They take contextual info that ought to be localized, and smear it out between layers, across multiple or even many different locations in code. I am becoming increasingly careful about making this trade-off. I have seen the price it exacts, and there are definitely cases where that cost is not worth the gain.
So the big problem with both POCOs and conventions is a loss of context. What's so bad about that? In a word: coupling. Regardless of the absence of tangible framework attachments like base classes and interfaces, your POCOs and conventional objects are still tightly coupled to the framework. But instead of looking at the object and seeing the coupling, it is made almost completely invisible.
When you write a POCO that a framework needs to process, it needs to be simple. Anything too complicated and the framework won't understand it or will make an assumption that doesn't hold, and then the whole thing comes crashing down. Under this pressure your POCOs get dumber and dumber. Unless you're very diligent, before long they are not much more than simple property bags. And a bag of properties is no model. It's just a bunch of data.
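The end state tends to look something like this (an illustrative sketch; the types are hypothetical):

```csharp
using System.Collections.Generic;

// A "model" reduced to a property bag. Nothing here says what an Order
// does, what must stay consistent between Lines and Total, or which
// states are legal -- that knowledge now lives with the consumers.
public class Order
{
    public int Id { get; set; }
    public int CustomerId { get; set; }
    public List<OrderLine> Lines { get; set; }
    public decimal Total { get; set; }
    public string Status { get; set; }
}

public class OrderLine
{
    public string Sku { get; set; }
    public int Quantity { get; set; }
    public decimal Price { get; set; }
}
```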
With the behavior driven out, there's no context with which to understand the meaning of the data, the relationships, and the references. There's no indication why this property is reproduced over there, or why that container seems to exist solely to hold a single other object. Instead, this knowledge is encoded into all the places where the objects are used. It is distributed and diffused throughout the application, so that in no one place--or even handful of places--can you look to see the true meaning of what that object's role is.
The consequence of this is that points of consumption become fountains of boilerplate and redundancy in your code. At least it's consistent, though. So naturally you'll turn to conventions in order to scrub away some of this homogeneous noise. A good convention-based framework can look at your model, look at your usage, and determine all it needs to know to glue them together. This really streamlines things. But soon enough you've compounded the problem, because conventions actually encourage the exact same diffusion of context. Only conventions grind context down to an even finer sprinkling of dust, and then scatter it on the wind.
It should be clear enough why this is dangerous. It's very easy to slip from here into what John Cook calls "holographic code", which he describes far more elegantly than I have here.
But take the sales pitch with a grain of salt. Conventions will make some of your code simpler, but they can also easily obscure the problem and the solution--or even just the actual model--behind the patterns mandated by the mechanism. An experienced "framework X developer" can see right past that noise. But should we really need to be framework experts just to understand the business problems and solutions? Solid and inviting framework documentation can go a long way in helping people see the signal in all the noise.
Likewise, POCOs are tremendously light-weight, quick, and flexible. This commonly comes at a cost of design confusion or opacity of purpose when they are seen separated from their points of consumption. Establishing context is crucial in avoiding confusion and misuse. POCOs should be small and focused, not general and reusable. And they should be located as close as possible to their consumers.
While the problem can get very bad if you're not careful, it's by no means inevitable. And the speed you gain for the 80% case can often be invaluable. You just need to be aware of the pathologies in play, and take steps to mitigate them. If you're lucky, wise decisions on the part of the framework designers can reduce the number of compromises you have to make. But even more important--because it's actually under your control--wise decisions on the part of your development team can contextualize the anemic and opaque bits, limiting their reach.
My advice is to be aware of the problem, and tackle it head-on. Whether by careful coding or clear documentation, be diligent in providing context where the frameworks will tend to suck it away.
Saturday, September 17, 2011
Ian Griffiths has written a very deep and informative post about what WinRT really is, and what is meant by all this "projection" vs "native" business. It's a great read, and you should dig in. Especially if you're an old COM hand, like myself.
I have seen a lot of important questions floating around lately (most notably from @kellabyte) that this post answers. But it takes a long time and many diversions before it gets there. So for those who don't have the time or the background to wade through Ian's post, I humbly present a TL;DR bullet list summary.
- WinRT appears to be the new Windows API. It is probably the next evolution of Win32.
- WinRT is native, and just as fast as native has always been.
- WinRT is rooted in the next evolution of COM, not .NET.
- WinRT involves just as much indirection (and "inefficiency", compared to COM-less native code) as COM always has.
- WinRT "projections" are language-specific. For C++, they're native.
- WinRT projections do no more and no less than smooth over much of the API for consuming languages, putting a curtain around the pain and noise of working with COM and HRESULTs directly.
- WinRT projections are not the next .NET windowing framework. They do not replace WinForms, WPF, or Silverlight.
I'll also add to this list my own prediction: We will probably not want to directly consume WinRT any more than we wanted to directly consume Win32. There will be value in a GUI framework to abstract that away, just as there always has been. I haven't heard any mention of the Metro-based successor to WPF/Silverlight, but I would expect one to be delivered with the next version of VS and .NET. I look forward to hearing more from DevDiv now that all the WinRT details are starting to trickle out.
Monday, September 12, 2011
The "big four" frameworks in this space are AngularJS, KnockoutJS, Backbone.js, and Batman.js. All four of these frameworks are primarily concerned with finding ways to purify and simplify two parts of the architecture of most modern applications: the UI (a.k.a. view, in this case, the HTML), and the model that sources the information in the UI. The primary obstacle in the way of achieving this goal is something called data-binding, which is a terse and useful term for synchronization of the dynamic information displayed in your UI to the state of an object model in your application.
AngularJS goes about this in a strikingly different way from the other three frameworks. The developers of Angular elected to use HTML "templating" as the mechanism of data-binding. Which is to say, you take a "template" of your web page, poke holes in it where you want your dynamic information to go, and then place in those holes small bits of syntax which tell the templating engine how to build the HTML to fill the holes at runtime. Below is a simple example of a template, as it would appear in an application built with AngularJS. If it looks familiar, that's because it's my own modified version of the front page demo from the AngularJS website. Be sure to note the fidelity of the template to the HTML it will produce. It's very clean, and you can see clearly the features of the document in it.
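Something along these lines (a simplified stand-in rather than the exact markup of that sample):

```html
<!doctype html>
<html ng-app>
<head>
  <script src="angular.min.js"></script>
</head>
<body ng-init="qty = 1; cost = 19.95">
  <!-- The template reads as plain HTML; the {{...}} holes are filled,
       and re-filled, by the engine as the model changes at runtime. -->
  <div>
    Quantity: <input type="number" ng-model="qty">
    Cost: <input type="number" ng-model="cost">
    <p><b>Total:</b> {{qty * cost | currency}}</p>
  </div>
</body>
</html>
```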
Templating is great for producing a document on demand using data that isn't known until runtime. It's a technique that has been around for quite a while now, and we've mostly figured out some really convenient syntax and conventions for doing it. Heck, ASP.NET MVC is a template-based solution for creating HTML on the fly. It works fantastically. But MVC is not dynamic once that document leaves the server. It builds the HTML once and sends it off to the client, never to revisit it again. The next document it serves up will be a whole new document in response to a whole new request. This is very different from the way Angular uses templating on the client, where the same document may be changed and re-rendered hundreds of times, or more.
For extremely dynamic sites, where most of the elements on the page are subject to change in some way or another and very little can be considered "static", I think more conventional, code-centric solutions are a better fit. At that point it makes sense to move as much logic as possible into code, where it can be encapsulated in objects and layers with clear roles and identities. Here it may be prudent to start considering Backbone, Batman, Knockout, or maybe even something else.
Monday, August 8, 2011
First I want to say thanks to Alex for the feedback on my previous post. I did eventually manage to clean up my design quite a bit, but couldn't really put my finger on why it was so hard. After reading Alex's comment it occurred to me that I hadn't really thought very hard about where the layer boundaries are. I had (and maybe still have) my persistence, my service, and my infrastructure layers somewhat intertwined. I'm ashamed to have been bitten by this, considering what a proponent of intentional design and layering I've been. But I was on a spike to tackle some unfamiliar territory, as well as fighting with a bit of an uncooperative helper framework, so it seems I bit off more than I could chew.
The problem I was trying to deal with was a multi-threaded service, with some threads producing data, and others consuming it to store to a DB. Between device sessions, threads, DB sessions, and domain transactions, I had a lot of "resources" and "lifetimes" that needed very careful managing. It seemed like the perfect opportunity to test the limits of the pattern I've been fond of lately, of creating objects whose primary responsibility is managing lifetime. All services which operate within the context of such a lifetime must then take an instance of the lifetime object. This is all well and good, until you start confusing the context scoping rules.
Below is a sample of my code. I'd have to post the whole project to really give a picture of the concentric scopes, and that would be at least overwhelming and at worst a violation of my company's IP policies. So this contains just the skeleton of the persistence bits.
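A simplified rendering of that skeleton (just the shape of the chain, with the real persistence details elided):

```csharp
using System;

public class UnitOfWork : IDisposable
{
    // Owns the transaction lifetime: opened on construction,
    // committed explicitly, rolled back on dispose if never committed.
    public void Commit() { /* flush and commit */ }
    public void Dispose() { /* roll back if not committed */ }
}

public class SessionProvider
{
    private readonly UnitOfWork _unitOfWork;
    public SessionProvider(UnitOfWork unitOfWork) { _unitOfWork = unitOfWork; }
    // Hands out the DB session enlisted in the current unit of work.
}

public class DataContext
{
    private readonly SessionProvider _sessions;
    public DataContext(SessionProvider sessions) { _sessions = sessions; }
}

public class Repository
{
    private readonly DataContext _context;
    public Repository(DataContext context) { _context = context; }
    public void Save(object entity) { /* persist via the data context */ }
}
```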
The important part of this picture is the following dependency chain (from dependent to dependency): Repository->DataContext->SessionProvider->UnitOfWork. What this would indicate is that before you can get a Repository object, you must have a Unit Of Work first. But it's common in our web apps to basically just make the UoW a singleton, created at the beginning of a request, and disposed at the end. Whereas in a service or desktop app the most obvious/direct implementation path is to have different transactions occurring all over the place, with no such implicit static state to indicate where one ends and the next begins. You get a common workflow that looks like this: Create UoW, save data to Repo, Commit UoW. This doesn't work nicely with a naive IoC setup because the UoW, being a per-dependency object now, is created after the Repo, which already has its own UoW. The changes are in the latter, but it's the former that gets committed. So either the UoW must be created at a higher scope, or the Repo must be created at a lower scope, such as via a separate factory taking a UoW as an argument. I'm not super fond of this because it causes the UoW to sort of vanish from the code where it is most relevant.
One final option is that the dependency chain must change. In my case, I realized that what I really had was an implicit Domain Transaction or Domain Command that wasn't following the pattern of having an object to manage lifetime scope. The fact that this transaction scope was entirely implicit, and its state split across multiple levels of the existing scope hierarchy, was one of the things that was causing me so much confusion. So I solved the issue by setting up my IoC to allow one UoW and one Repo per "lifetime scope" (Autofac's term for child container). Then I created a new domain command object for the operation which would take those two as dependencies. Finally the key: I configured Autofac to spawn a new "lifetime scope" each time one of these domain command objects is created.
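A sketch of that arrangement, reusing the skeleton above and standing in Autofac's Owned<T> relationship type for the scope-spawning mechanism (the command name here is hypothetical):

```csharp
using System;
using Autofac;
using Autofac.Features.OwnedInstances;

var builder = new ContainerBuilder();

// One UoW (and one of each persistence object) per lifetime scope.
builder.RegisterType<UnitOfWork>().InstancePerLifetimeScope();
builder.RegisterType<SessionProvider>().InstancePerLifetimeScope();
builder.RegisterType<DataContext>().InstancePerLifetimeScope();
builder.RegisterType<Repository>().InstancePerLifetimeScope();
builder.RegisterType<SaveReadingsCommand>();

var container = builder.Build();

// Owned<T> spawns a nested lifetime scope per resolve: the command, its
// Repository, and its UnitOfWork all live (and die) together in it.
using (var command = container.Resolve<Owned<SaveReadingsCommand>>())
{
    command.Value.Execute();
}

public class SaveReadingsCommand
{
    private readonly Repository _repository;
    private readonly UnitOfWork _unitOfWork;

    public SaveReadingsCommand(Repository repository, UnitOfWork unitOfWork)
    {
        _repository = repository;
        _unitOfWork = unitOfWork;
    }

    public void Execute()
    {
        _repository.Save(new object() /* the produced data */);
        _unitOfWork.Commit(); // the same UoW instance the repository used
    }
}
```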
Again this illustration is a bit simplified because I was also working with context objects for threads and device sessions as well, all nested in a very particular way to ensure that data flows properly at precise timings. Building disposable scopes upon disposable scopes upon disposable scopes is what got me confused.
I have to say that at this point, I feel like the composability of the model is good, now that I've got layer responsibilities separated. But it feels like a lot is still too implicit. There is behavior that has moved from being explicit, if complex, code within one object to being emergent from the composition of multiple objects. I'm not convinced yet that this is an inherently bad thing, but one thing is for certain: it's not going to be covered properly with just unit tests. If this is a pattern I want to continue to follow, I need to get serious about behavioral/integration testing.
Friday, August 5, 2011
I'm noticing a strange thing happening as I make classes more composable and immutable. Especially as I allow object lifetime to carry more meaning, in the form of context or transaction objects. This shift has made object initialization involve a lot of "real" computation, querying, etc. Combined with a desire to avoid explicit factories where possible, to capitalize on the power of my IoC container, this has led to an accumulation of logic in and around constructors.
I'm not sure how I feel about so much logic in constructors. I remember being told, I can't remember when or where, that constructors should not contain complex logic. And while I wouldn't call this complex, it certainly is crucial, core logic, such as querying the database or spinning up threads. It feels like maybe the wrong place for the work to happen, but I can't put my finger on why.
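Concretely, the kind of constructor I mean looks something like this (the types are illustrative):

```csharp
using System.Collections.Generic;
using System.Threading;

public interface IDeviceCatalog
{
    IList<Device> FindActiveDevices(); // hits the database
}

public class Device { }

public class DeviceSession
{
    private readonly IList<Device> _devices;
    private readonly Thread _pollingThread;

    public DeviceSession(IDeviceCatalog catalog)
    {
        // Querying and thread spin-up happen as a side effect of
        // construction -- crucial, core logic, but arguably misplaced.
        _devices = catalog.FindActiveDevices();
        _pollingThread = new Thread(Poll) { IsBackground = true };
        _pollingThread.Start();
    }

    private void Poll() { /* produce data for the session's lifetime */ }
}
```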
It's important to me to be deliberate, and I do have a definite reason for heading down this path. So I don't want to turn away from it without an equally definite reason. Plus, pulling the logic out into a factory almost certainly means writing a lot more code, as it steps in between my constructors and the IoC container, requiring manual pass-through of dependencies.
I'm not certain what the alternative is. I like the idea of object lifetimes being meaningful and bestowing context to the things below them in the object graph. Especially if it can be done via child containers and lifetime scoping as supported by the IoC container. But it is really causing me some headaches right now.
My guess at this point is that I am doing too much "asking" in my constructors, and not enough telling. I've done this because to do the telling in a factory would mean I need to explicitly provide some of the constructor parameters. But I don't want to lose the IoC's help for providing the remainder. So I think I need to start figuring out what my IoC can really do as far as currying the constructor parameters not involved in the logic, so that I can have the best of both worlds.
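As a first sketch of that currying idea, Autofac's auto-generated delegate factories can already do some of it (the types here are hypothetical): I supply the parameter involved in the logic, and the container supplies the rest.

```csharp
using System;
using Autofac;

var builder = new ContainerBuilder();
builder.RegisterType<ConsoleEventLog>().As<IEventLog>();
builder.RegisterType<ImportTask>();
var container = builder.Build();

// Resolving Func<string, ImportTask> curries the constructor: I "tell"
// the task its source path; Autofac fills in the IEventLog dependency.
var createTask = container.Resolve<Func<string, ImportTask>>();
var task = createTask("readings.csv");
task.Run();

public interface IEventLog { void Write(string message); }

public class ConsoleEventLog : IEventLog
{
    public void Write(string message) => Console.WriteLine(message);
}

public class ImportTask
{
    private readonly string _sourcePath;
    private readonly IEventLog _log;

    public ImportTask(string sourcePath, IEventLog log)
    {
        _sourcePath = sourcePath;
        _log = log;
    }

    public void Run() => _log.Write("Importing " + _sourcePath);
}
```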
Maybe it will bite me in the end and I'll pull back to a more traditional, less composable strategy. Maybe the headaches are a necessary consequence of my strategy. But I think it's worth trying. At least if I fail I will know exactly why not to do this, and I can move on to wholeheartedly looking for alternatives.
Thoughts, suggestions, criticisms? Condemnations?
Tuesday, May 17, 2011
Lifetime scopes are the primary unit of context in Autofac. When the framework has to resolve a dependency, it doesn't pull it from a container. Rather, it pulls it from a lifetime scope. A lifetime scope is different from a container in that you can't change or query its configuration at runtime. It is a purely resolution-oriented interface. But more importantly, in concert with a few other features, it offers a very convenient way to establish what I call "resolution contexts" within your application.
A resolution context is a "bubble" within your application with some or all of its own resolution rules. You can dictate that within a given bubble, certain objects are guaranteed unique, and will be cleaned up when the context is disposed of. Or you can ensure that dependencies are resolvable inside the bubble, but not outside of it. By nesting bubbles that do this, you can enforce inter-layer dependency rules ensuring, for example, that the domain services layer does not depend on the UI layer. You can see the obvious reason that the word "scope" is used in the name: scoping is their explicit purpose.
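A minimal sketch of such a bubble (the type names are assumed for illustration):

```csharp
using Autofac;

var builder = new ContainerBuilder();
builder.RegisterType<DomainService>();   // visible everywhere
var container = builder.Build();

// A nested bubble: UiRenderer exists only inside this scope, so nothing
// composed from the root (e.g. the domain layer) can ever depend on it.
using (var uiScope = container.BeginLifetimeScope(
           b => b.RegisterType<UiRenderer>()))
{
    var renderer = uiScope.Resolve<UiRenderer>();   // resolvable here
}

// container.Resolve<UiRenderer>() out here would throw: outside the bubble.

public class DomainService { }
public class UiRenderer { }
```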
The features that combine with lifetime scopes to make them a truly powerful idiom are tags and tagged registrations. You can tag a lifetime scope with an object that identifies that scope. This value can also be specified on a given type registration to mark that type as being unique within a scope having that tag.
Combined, these features allow you to tie instances of objects to a scope, and ensure that anything resolved within that scope gets a particular object. This can be particularly useful for managing things such as database transactions, or domain units of work. Or really any transaction-like operation consisting of a set of sub-operations involving resources which need only, or should only, exist for the duration of the parent operation.
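In practice that looks something like this (a minimal sketch; the tag and types are illustrative):

```csharp
using Autofac;

var builder = new ContainerBuilder();

// One UnitOfWork per scope tagged "transaction": every resolution inside
// a matching scope sees the same instance, disposed with the scope.
builder.RegisterType<UnitOfWork>()
       .InstancePerMatchingLifetimeScope("transaction");
var container = builder.Build();

using (var txn = container.BeginLifetimeScope("transaction"))
{
    var first = txn.Resolve<UnitOfWork>();
    var second = txn.Resolve<UnitOfWork>();
    // first and second are the same object -- the scope's unit of work.
}

public class UnitOfWork { }
```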
Without this feature, it falls to the developer to explicitly write an object whose responsibility is to manage the lifetime or reach of these things, and another whose responsibility is to generate and dispose of those scoping objects. This is conceptually simple, but for an inexperienced dev it's a surprisingly tough problem. For an experienced dev, it's annoyingly repetitive.
The thing I love about Autofac is that it supports tagged scopes which make it trivially easy to solve this problem. The thing I hate about Autofac is that you can't define as part of the registration process where you want these nested lifetime scopes to be generated. Instead, you must use a hook, a direct framework reference, a relationship type or some other extension mechanism, to dictate when nested lifetime scopes are introduced.
The reason I have these conflicting feelings is that Autofac approaches, but falls short of, what I have come to consider the ideal role of an IoC container: to be the composer of the application. The use to which most of us put IoC containers today diverges from the implications of the name "container". While they do serve as containers in the sense that they hang onto shared (singleton) instances of objects, they also fill a factory role. And with Autofac's nested lifetime scopes, they can draw boundaries around self-contained operations and object graphs. Windsor prides itself on powerful, flexible "release" mechanisms for cleaning up objects that are no longer needed. Again, this goes far beyond the role of a "container".
Yet despite this, the configuration API of most IoC frameworks is still highly container-oriented. We "register" types and interfaces, attaching bits of metadata here and there as needed. These are "stored" in the container, which can be queried or overridden at run-time by adding or removing registrations.
But an IoC is at its most effective when it is acting not as a container but as a rules engine: regulating when new objects are created and when existing ones are reused. Or how one object is chosen from many that implement a common interface. Or when and how object graphs are disposed of and resources cleaned up. These roles all build toward an overarching theme of "composing" the application. That is to say, building it up out of different blocks of functionality, fitting them together according to a pattern which is clearest when viewed as a whole.
This is why the ServiceLocator block from Microsoft makes me angry. While you can still configure the composition of your application using your IoC of choice, ServiceLocator exposes it as if it were a simple abstract-to-concrete mapping operation. And beyond that, the base idea of a centralized "service locator" completely undermines the goal of IoC in the first place, which is to invert the control. Don't ask for what you think you want. Just expose the dependency declaratively, and let it be provided to you as deemed appropriate by someone whose whole job is to know what you need.
I say we should embrace the composer idiom. I would love to see an IoC framework that took this role to heart, and wore it on its sleeve, so to speak, by expressing it naturally in its API. Autofac comes closest of any container I've used or researched, but it still shows some artifacts of its container heritage. One big improvement that could be made is a declarative configuration-time way of establishing when new scopes should be spun up. I posted a gist to this effect over the weekend and shared it with Krzysztof Kozmic, hoping he'd take some inspiration from it for Windsor. But in truth, Windsor also still has a lot of other aspects that are tied to its identity as a "container".
Maybe it's time for someone to take another crack at the IoC problem. The ideal functionality is mostly all available out there. All that's lacking is an API that makes its true role obvious and natural.
Tuesday, May 3, 2011
This weekend I tweeted three simple rules for determining if a class's responsibility has been properly assigned. Class responsibilities were on my mind for a significant portion of the day on Saturday. It wasn't anything conscious or intentional. Rather I think it was a sort of run-off from a lot of responsibility talk over the course of the previous week.
Lately I find that whenever someone asks my opinion about a particular design decision, the question of responsibility is the first place I go. It's far and away the most effective and powerful question to ask, as it reliably delivers the most bang for the buck.
At both the class and the method level, the most important factor in deciding where and how to implement a particular bit of functionality is whose responsibility it is. Likely, the whole picture of the task is made up of several bits of responsibility that belong in several places. If your application is architected in such a way that responsibilities are well isolated and assigned to layers and clusters of classes, then breaking the task down into those constituent responsibilities gives you a draft blueprint for how to get the job done.
If your application is not fortunate enough to have such a clear and well-articulated architectural scheme... well, I'd advise you to make it a goal. Start by learning about onion architecture. But in the meantime, try to keep in mind the rules of thumb below and evaluate your classes against them as you work. If they don't measure up, see if they can be improved incrementally. Leave them cleaner than you found them.
Now, without further ado, the rules.
- If you can't sum up a class's responsibility in 2 sentences, shift your level of granularity. It almost certainly has more than one responsibility, and you need to think hard about why you've combined them.
- If you can't sum up a class's responsibility in a single sentence, then you have lots of room for improvement. Don't be satisfied until you have a clear, concise statement of the class's responsibility.
- If you can sum up a class's responsibility in a single sentence, go a step further and think hard about whether it has another unstated or assumed responsibility. Unit tests can really help here.
The message here is not subtle. Every class needs to have a clear answer to the question, "What is my responsibility?" This is, in my experience, the most important factor in establishing clean, clear, composable, testable, maintainable, comprehensible application design and architecture. It's not the end of the road, but it's a darn important step on the way.
Tuesday, April 19, 2011
Note what I just said. DRY is "a fundamental rule of programming." I suspect very few people with appreciable experience would reasonably and honestly dispute this claim. But a rule is a funny thing, and a fundamental one doubly so. Rules are a first-pass at quality. They establish a minimum level of ensured quality by being a reasonable course of action in the vast majority of cases, at the expense of being a sub-optimal course of action in many cases, and even a plain bad option in some.
I am by no means saying that DRY is a bad thing. I'm not even saying not to do it. But I am saying that applying DRY is a beginner's first step toward good design. It's only the beginning of the road, not the end.
Eliminating repetition and redundancy is a very natural and obvious use of programming. It's especially hard to deny for the many people who slide into the profession by way of automating tedious and repetitious manual labor. It just makes sense. So when you tell a new programmer Don't Repeat Yourself, they embrace it whole-heartedly. Here is something that a) they believe in, b) they are good at, and c) automatically achieves a better design. You'd better believe they are going to pick up that banner and wave it high and proud.
This is a good thing. If you can get your team to buy in to DRY, you know you'll never have to worry about updating the button ordering on that dialog box only to find that it hasn't changed in 3 scenarios because they're using a near identical copy in another namespace. You know that you'll never have to deal with the fallout of updating the logic that wraps up entered data for storage to the database and finding inconsistent records a week later because an alternate form had repeated the exact same logic.
What you might run into, however, is:
- A method for rendering currencies as user-friendly text that is also used to format values for XML serialization, the goal being to keep all text serialization centralized regardless of whether it's meant for display or storage.
- Views, controllers, configuration, and services all using the same data objects, regardless of the dramatically different projections necessary for every operation, in the interest of avoiding redundant structures.
- Views, controllers, and persistence layers all depending directly on a single tax calculation class. Here the goal is simply to centralize the business logic, but it actively works against establishing proper layering in the application.
DRY focuses on behavior. That is to say algorithms. "Algorithms", along with "data", are the fundamental building blocks of applications. They are the substance our work is made of. But trying to build complex software while thinking only or primarily at this elementary level is like trying to map a forest one tree at a time.
At some point, the programmer must graduate from the rule of DRY to the nuance of responsibility. The Single Responsibility Principle (SRP) is the cardinal member of the SOLID family. Its premier position is not simply a coincidence of the acronym. It's also the next most important rule to layer onto your understanding of good design.
Responsibility is about more than function. It's also about context. It's about form and purpose. Identifying and isolating responsibilities allows you to take your common DRY'ed functionality, and cast it through various filters and projections to make it naturally and frictionlessly useful in the different places where it is needed. In a very strict sense, you've repeated some functionality, but you've also specialized it and eliminated an impedance mismatch.
Properly establishing and delimiting responsibilities will allow you to:
- Take that currency rendering logic and recognize that display and storage are dramatically different contexts with different needs (see the sketch after this list).
- See that the problem solved by a data structure is rarely as simple as just bundling data together. It also includes exposing that data in a form convenient to usage, which may vary dramatically from site to site.
- Recognize that the calculation is worth keeping DRY, but so is the responsibility to trigger that calculation. It can be both triggered and performed in the business layer, and the result projected, along with contextually related business data, to wherever it needs to be.
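To sketch just the first of those (hypothetical names; one plausible way to split the contexts):

```csharp
using System.Globalization;

// The arithmetic stays DRY in one core class, while each context owns
// its own projection of it.
public static class CurrencyMath
{
    public static decimal Round(decimal amount) =>
        decimal.Round(amount, 2, MidpointRounding.AwayFromZero);
}

// Display context: user-facing, culture-sensitive, lossy by design.
public static class CurrencyDisplay
{
    public static string Format(decimal amount, CultureInfo culture) =>
        CurrencyMath.Round(amount).ToString("C", culture);
}

// Storage context: invariant and round-trippable, never localized.
public static class CurrencyStorage
{
    public static string Format(decimal amount) =>
        CurrencyMath.Round(amount).ToString(CultureInfo.InvariantCulture);
}
```

In a strict sense the formatting responsibility is "repeated", but each repetition is specialized to its context, and neither can quietly corrupt the other.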
Tuesday, April 12, 2011
Everyone is telling you "this is how you do it" or "this is how we've always done it". You think you can follow the rule, but you don't think it will be pretty. What do you do?
I have witnessed many people in this situation make their choice. Most commonly they do whatever it takes to follow the rule. If the peg doesn't fit, they pound it till it's the right shape. If the cloth won't cover, they'll stretch, fold, and tear creatively until they can make a seam. Somewhat less often, they just toss out the advice and do it the way they are comfortable with. Often that just means "hacking out" an ad hoc solution.
In either case, the person learns nothing. They will repeat their struggle the next time they face a situation that is ever so slightly different. They will sweat, stress, and hack again. Over and over, until someone shows them a new rule to follow.
As students we are taught to learn first by rote, then by rule. If we are lucky, we are eventually tested on whether we have learned to think. And most commonly we manage to slog it out and do well enough to pass without actually being required to think. It is so very easy to become comfortable with this model of learning. A particular someone, vested with the responsibility of fertilizing our minds and nurturing the growth of understanding, knows the answers. We don't understand, but someone can at least tell us when we are right or wrong, and often that's enough.
We settle into dependence. And by settling, we establish roadblocks in our path before we even set foot on the road. By settling, we refuse to take ownership of our own knowledge. We put ourselves at the mercy of those who know more and are willing to share of their time and understanding to help us overcome obstacles of our own creation.
This situation is not inevitable. You can avoid dooming yourself to toil under it with a very simple determination. But make no mistake, this simple determination will require determination. When you face the prospect of doing something that you do not understand, stop, take note, and ask "Why?" Refuse to continue on with any course of action until you know why you are doing it.
I'm not talking about questioning authority here (though that's good too). I am advocating understanding. If you think there's any chance you may have to face a similar situation again, then as a professional developer it behooves you to understand what you're doing this time, and why. This prepares you firstly to defend your actions, and secondly to tackle similar but different problems in the future.
By recognizing the reasoning behind a particular prescribed course of action, when you encounter a similar situation in the future you will be able to identify the subset of the problem that is subject to the prescription. Seeing this allows you to conceptually distinguish that part of the problem from the rest. From this vantage point you can decide whether the remainder is just noise that has to be accommodated, or something more significant. You will be able to start to consider whether there is another layer or dimension to the problem which might be better served by a different or additional pattern. You will be able to think, intelligently, and intentionally, about the problem both as a whole, and in part.
Lack of mindfulness is the scourge of intellectual pursuits (and a great many other things in life). Whether in programming, in health, in investment, etc., it binds you to the service of rules and systems. It puts you at the mercy of those who have understanding, and under the thumb of those who own the systems. Benevolent or otherwise, do you really want your own success and satisfaction to come at the whim of someone else, for no other reason than that you couldn't be bothered to put in the effort to understand? Do you want to spend your career tromping the same grounds over and over again, never coming to any familiarity or understanding of the landscape?
Always ask, "Why?" Then take the time to understand. Always be deliberate and intentional about applying any solution. Don't just follow the directions of the crowd, or some authority. You're not a patient taking orders from your doctor. This is your domain. Own your knowledge. Your future self will thank you.