Tuesday, January 28, 2014

You Are Not an Island

Yep, today I'm going to bang that drum again. It's been a while. And as I sit on Twitter and daily watch the struggles of those who want to learn more than their education and immediate professional environment can offer, I think we are all in need of a reminder.

Programming seems to me to have some strong similarities to medicine and law. In these professions, schooling is understood to be only one part of the necessary education. There is a formal science (or with law, a formal-ish system) underneath, but the work being done on top of that substrate is often creative, intuitive, or even artistic. The skills and experience required to work that magic aren't learned in the classroom or from a book, but rather from a combination of doing the work, and of guidance from those who've walked the path before.

But programming as a pursuit still seems to lack something that law and medicine have: a fairly consistent pattern of education and growth as part of a group. There is essentially no such thing as a physician autodidact. And though it seems possible to be so in the realm of law, it also seems essentially non-existent.

In the realm of programming, autodidacticism is not only common but admired and encouraged. Some glorify it as the only way to produce the best of the best. Depending on who you ask, formal education is viewed alternately as a way to shore up foundations and push beyond personal walls, or as a waste of time and money that closes more mental horizons than it opens.

I find this terribly unfortunate. While it's true that many people feel they "learn best this way", and some of them probably do, there are inarguable benefits to a more planned education, to group study, and most of all to true mentorship. Planned education helps to set a smooth ramp of learning. Group study provides an exchange of ideas and a moral support network.

But this kind of learning culture is extremely rare to find in the programming world. There is a spirit of independence and individualism in the programming community at large that I think not only suppresses diversity, but also hinders our growth as individual practitioners and holds back the state of the art in the professional community.

Yes, this personal trial by fire highlights strong individuals and purges dross. But is it worth it to lose so many with potential, and so many that merely require a different sort of path? The fewer quality programmers there are, and the more homogenous their background, the more limited the potential of the craft as a whole.

No developer should be an island. The webs of knowledge we see sprouting up with Twitter and developer conferences like CodeMash or That Conference are a great start, but they're not enough. As a community we need to find a better way to bring our amateurs up into professionals. In the meantime, as individuals we can only do ourselves a favor by seeking out mentorship, finding trusted colleagues, and establishing a shared growth with them.

Tuesday, January 21, 2014

Bespokists vs Gluers

Another in a long line of posts eschewing JavaScript frameworks has hit the web and the community response has been predictably dichotomous.

Being a programmer, I of course have an opinion on this. But I try to retain a bit of nuance when looking at questions like this.

I think it's only fair that we acknowledge that the Moot folks are being nuanced as well. I mean, they're not advocating total vanilla JS where everything is bespoke, artisanal, hand-crafted code. They do use libraries. They're just avoiding frameworks. But the argument can often take on a similar form. To the average developer, someone eschewing frameworks appears very close to a Luddite.

The arguments against this position are fairly simple and obvious. Why take on extra work when someone else has done it for you? Why reinvent the wheel? Why waste the effort, experience, and wisdom that are baked into a framework? Every discipline has tools, it's foolish to insist on working with your hands when a power tool gets the job done faster, and often better. Et cetera.

But again, the position isn't to avoid all prepackaged code. It's to avoid a particular type of abstraction that takes away responsibility from the developer for a whole class of design decisions. Choosing a framework makes an opinionated statement, whether intentionally or not, about what the structure of the solution should look like. The problem, as identified by Moot and their compatriots, is that this delegates the details of that statement to a third party, relinquishing a large degree of design control that could have a dramatic impact on the evolution of your application.

Realistically speaking, eschewing all frameworks means you have a lot more to do to reach the same point as a team who starts off with a framework. The difference is that you'll be able to tailor your code to your needs in nearly every aspect (with the qualifier "given enough time" often going unstated). The framework users on the other hand will be taking on a lot of noise that can very easily obscure the core points of the solution they are building for the problem they are tackling. It can make it harder to look at the codebase and pick out what exactly is being accomplished by all the code, because most of the hand-written code is gluing different things together. The thing that is most prominent is not the solution, but rather the structures mandated by the frameworks.

These are all great points. The problem I see is that the trade-off isn't usually truly between slow, steady, and fitted versus rapid, noisy, and constrained, as it's represented to be. Huge factors in how these different strategies play out are the time available and the team's skill level. Frameworks may create noise and impedance mismatches, and encourage you to fit your code into structures that weren't meant for your specific problems (e.g. not every screen benefits from MVVM, as opposed to MVC or Passive View). But having no frameworks means you need to have the experience and wisdom available on your team to know how to put together the patterns you need, and when to use what. Not to mention that building infrastructure to streamline these things is not exactly a trivial task either.

There's a place in the world for folks who want to lovingly hand-craft every line of code that they can. But it's likely always going to be a niche. These are often the folks who explore and push the boundaries, finding new economies and elegant simplicity, or even new patterns. But for the rest of the developers, frameworks are tools. If they help deliver features to customers, then they aren't going anywhere, and there's nothing wrong with that. Realistically, no one thinks that framework X is the alpha and omega of solutions to problem Y. This too will pass and be improved upon.

The best and most experienced engineers will, absent other constraints, likely choose to start somewhere in the middle.

Tuesday, January 14, 2014

Opinions

Opinionated frameworks are pretty popular right now, with good reason. They take a set of good, proven decisions, and just make them for you, taking the dilemma of choice and the risk of building naive infrastructure or inconsistent conventions right out of your hands. An opinionated framework doesn't (necessarily) claim to be doing things "the right way", but any advocate will tell you that the reason they're so great is that they do things in a "good way", and make it hard to screw that up. It sounds like a great pragmatic compromise.

Unopinionated frameworks are also fairly popular right now, with different folks, and for different reasons. Sometimes, people get sick of the golden handcuffs of an opinionated framework that didn't imagine you'd need to do X and has no way to gracefully take advantage of any of the offered functionality to do so. Or you start to realize more and more that the "happy path" the framework sets you on isn't really so happy, and tweaking the assumptions would mean you'd write more testable code, and less of it. But you can't, because that violates the opinions of the framework, and so you're stuck. So next time, you vow to use an unopinionated framework that gives you a nice solid substrate to build on. Good abstractions over the layer below, lots of conveniences for otherwise finicky code, and the freedom to build whatever patterns and infrastructure you want.

I have to admit that for the past couple of years I've been in the "unopinionated" camp. I had some bad experiences with opinionated frameworks making it difficult to take advantage of a mature IoC container like Autofac. Then I fell victim to one that was actively standing in the way of using a slightly different structural design pattern. When I'd had my fill of these frustrations, I ran straight into the waiting arms of unopinionated frameworks.

And I bought the sales pitch on those too. Isn't it great to be able to choose whatever pattern you want? Isn't it wonderful to be able to build all the infrastructure you need to streamline your dev process? Isn't it rapturous to be able to separate responsibilities into as many classes and layers as you need to make things testable? Well, yes, it is... and yet, it also kind of sucks.

Emergent architecture is great. But having to roll it all from scratch really, really sucks. It's a burden and a weight on the team. Especially when you have inexperienced devs. The last thing you want to happen is for features to be stuck waiting for someone to have time to build the right infrastructure. And only slightly more desirable is to pile up masses of code that could be cleaner and simpler if only the infrastructure or automation was in place in time.

So the shine has come off the penny a bit for me. When you use an unopinionated framework, even if you have your own opinions, there's a good chance you're going to get caught in a morass of infrastructure work that will bottleneck your teammates' efforts. Worse, it adds to the complexity of your application, piling up layer on layer of non-feature code that has to be tested, fixed, and evolved over time.

But I still love unopinionated frameworks. It's just not for the reasons they are usually trying to sell you. I don't really want to trade in one set of handy conveniences and annoying constraints for the freedom of infinite choice burdened by a whole lot of finicky infrastructure backlog. Depending on your situation, this might be an okay trade-off, but it might also be really unwise. No, instead, what I love about unopinionated frameworks is that they give you a sandbox in which to experiment with better opinions.

Whether that means building something from scratch, or using add-ons or plugins that the community has already put together, the point is to think, play, experiment, and build something better. Not necessarily to build something different every time, or go back to cozy NIH syndrome.

The truth is, opinionated frameworks are great. They really do offer great efficiencies when you buy in and follow the happy path as best you can. When the going gets tough and you realize the tool isn't cutting it anymore, it's time to look for different opinions. So see what else is out there. See what new opinions people are forming in modular fashion on top of their unopinionated framework of choice. Maybe even build your own opinionated framework around those better opinions.

Because in the end it's not the presence or absence of opinions that matters. Heck it's not even really the quality of the opinions. It's the trajectory of the quality over time. And whether you're clutching tightly to an opinionated framework, or emerging every aspect of every project's infrastructure, you're probably spending too much time writing code and not enough time thinking about how that code could be better, simpler, and easier for your team to write and maintain.

But what do I know? That's just, like, my opinion, man.

Wednesday, January 8, 2014

How do you know what the right tool is?

Bob: "Use the right tool for the job."
Alice: "How do you know what the right tool is?"

Most programmers aren't typically building things from the bare ground up. We're also not building castles in the sky. We use libraries, frameworks, and platforms to establish a foundation on which we build a thing. These are all tools, and unfortunately the programming profession's standard toolbox contains mostly conceptual tools. Beyond that, nearly everything is subject to replacement or customization. Whether it be a UI pattern framework, or an ORM. So how do you choose?

The advice to "use the right tool for the job" is not constructive in terms of actually finding the right tool for the job. It's a warning to consider the possibility that you might not be using the right tool right now. And in my experience it's almost uniformly used to warn people away from over-engineering things or building custom infrastructure code.

So again, how do you choose? Alas, like so many things in this world, choosing well requires experience. Not just experience with the tools in play, but with the problem space you're attacking, and the constraints around the system. This disqualifies most developers from choosing the "right tool" up front, in all but a narrow band of problems to which they've had repeat exposure.

It's not always about "speed" or "simplicity". Even those metrics can be seen in different ways. Is it more important for structures to be consistent, or to eschew noise? What does it mean to be "fast" under a restrictive deadline that's a month away? How about a year? Are your developers more comfortable customizing infrastructure code to reduce overhead, or following a plan to reduce risky decision-making?

Every tool fits these situations differently, and it almost never has anything to do with whether it uses MVC, MVVM, or something home-rolled. Evaluating tools effectively requires you to at least think about these questions, as well as about what the framework does or doesn't do or what design decisions it leaves on your plate. Minimally you need to consider not just whether the tool can do the job you need, but how it does it, as well as how well it "fits your hand".

Most tools I've encountered fall into one of the buckets below, which I find very helpful in "choosing the right tool". The important part is to sit down and really think about both your needs, and the tool's trade-offs.

  • Good for modest jobs that have to be done quick, trading design and decision effort for speed out of the gate.
  • Good for long-lived software that will see many iterations of support, maintenance, and evolution, where incremental speed is traded away for flexibility and stability.
  • Good for unfamiliar territory, providing an easy on-ramp at the expense of graceful handling of edge-conditions.
  • Assume a particular structural pattern, taking a lot of boilerplate and/or automation work off your hands, but locking you into that structure even when it's an ill fit.
  • Good for experimenting with new patterns or varying existing ones, trading ready-made automation and assistance for a firm but unopinionated substrate.
  • Investments that pay dividends on team familiarity and expertise, trading a graceful learning curve for raw configurable power.

Every tool is unique, trading away these conveniences for those. Sometimes you'll know what this means, but often you won't. You'll never really know before you use a tool if those trade-offs are going to help you more than hurt you. Just like you don't really know what will be the hardest challenges of a project until you get to the end and look back. Get comfortable with this fact. Don't let it chain you to a single tool just because it's a known quantity or you're comfortable with it. Odds are good you're going to run into a job sooner or later where that's not the biggest concern.

So experiment, learn, and reflect on both the tools and your own knowledge. It'll help you make better choices, and not just for tools.


Tuesday, December 31, 2013

Year in Review

As a programmer who deals with the typical stress of imposter syndrome and the tech skill treadmill, I've long had an ever-present guilt about how I spend my free time. It used to be that at the end of the year, I would look back at what I spent my waking time on, and the next thing after work was usually video games, or a combination of games & TV, by a landslide. This was discouraging, because I couldn't avoid thinking about what I was missing out on, professionally, because that time wasn't spent on experiments or projects or freelancing or business ideas.

That changed this year. Don't get me wrong, I still struggle with choosing between "relax" or "be productive" when I find downtime. I still get discouraged when I think about just how long it will take me to get good at something new, or build the app I want to sell. But in reflecting on this year, I don't have regrets about how I spent the vast majority of my waking non-work time.

So what's different? What did I change?

Well, in 2012, my wife and I had a daughter. And in 2013, the activity I spent the most waking non-work time doing was, without a hint of a contest, being a dad.

That's a skill that I'll never struggle with letting get rusty, and it's a pastime that I'll never wish I'd done less of. It's something that I don't even need to think about to know that I'm making the right choice with my time, every time.

And as we sit here on December 31, 2013, on tenterhooks waiting for the arrival of our second child literally any hour now, I've never been more certain that my life is on the right track.

Happy New Year, everyone!

Bad Tests vs. No Tests vs. Never Tests

Bad tests are worse than no tests. But never trying to test is far, far worse than writing bad tests.

Bad tests will exert a friction on your development efforts. They cost time to write, they cost time to maintain, they don't provide clear benefit, and they may even guide you to bad design choices. Under the desperate pressure of a rock-and-a-hard-place deadline situation, if you don't know how to write good tests it's often better to just not write them. You're naturally going to have more churn in the code, and removing the burden of testing may actually make the churn less disruptive and the code more stable.
Sidebar: "The desperate pressure of a rock-and-a-hard-place deadline situation" is a place you don't want to be. It very rapidly becomes an "unsustainable no-win situation." In the long term, the only way to win is not to play. In the short term, you do what you can to get by until it's over, and hope you'll be allowed to clear the mess before moving on.
The key to Test Driven Design is that good tests drive good design. Bad tests don't... in the short term. Bad tests must be suffered, and examined, then removed, and rewritten. Bad tests must be improved through iteration, in tandem with the code that they test. Bad tests are learning tools to help you write good tests, which help you write better code. But if you don't have the time necessary to reflect, iterate, and learn from your tests, all you'll get from them is a creeping bitrot and a demoralizing stink.
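To make the contrast concrete, here's a hypothetical sketch (the Cart class and both tests are invented for illustration, in TypeScript): the bad test pins down a private storage detail and will shatter under any refactoring, while the better test asserts only observable behavior and so exerts healthy pressure on the public interface.

```typescript
// A tiny class under test, invented for illustration.
class Cart {
  private items: { sku: string; price: number }[] = [];
  add(sku: string, price: number): void {
    this.items.push({ sku, price });
  }
  total(): number {
    return this.items.reduce((sum, i) => sum + i.price, 0);
  }
}

// Bad test: couples to the private storage layout. It breaks the moment
// Cart switches from an array to a Map, even though behavior is unchanged.
function badTest(): void {
  const cart = new Cart();
  cart.add("widget", 5);
  const internals = (cart as any).items; // reaching into implementation
  console.assert(internals.length === 1 && internals[0].sku === "widget");
}

// Better test: asserts only observable behavior, so it survives refactoring
// and pushes the design toward a clear, sufficient public interface.
function goodTest(): void {
  const cart = new Cart();
  cart.add("widget", 5);
  cart.add("gadget", 7);
  console.assert(cart.total() === 12);
}

badTest();
goodTest();
```

The bad test isn't useless; suffered, examined, and rewritten, it's exactly the kind of learning tool described above. It only rots when there's no time to iterate on it.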

The moral of this story is not that you shouldn't test. It is to recognize that tests are not inherently good. They are a tool with a particular purpose, and they must be used properly and given room to do their job. If you can't give them the room they need, then they're not the right tool for your situation.

Yeah, that's a situation that you wouldn't have in an ideal world. Maybe you should say something about it. Maybe you should find a new job. Or maybe it's just a tight spot and you need to fight through it. But for goodness sake, don't make a bad situation worse by imposing a burdensome constraint that you can't benefit from.

Tuesday, December 24, 2013

The Humble Pride of the Professional

Among many other attributes, a professional is someone with enough pride of craft to insist on doing the best work that they can muster, and enough personal humility to accept that it will still often be wrong so they can learn from the failure.

This is necessary, but not sufficient.

Tuesday, December 17, 2013

A Guidance Anti-Pattern

Be suspicious of guidance that spends a large proportion of time addressing concerns of structure and organization, compared to what it spends discussing how to solve problems and make decisions.

Structure, organization, and naming are all shallow concerns that are easy to grasp onto and remember. They are easy to pass on. It is easy to attribute success to them through a very rational, but shallow analysis. And because they are easy, they get the focus and attention, while the kernel of wisdom that inspired the guidance is obscured, languishes, or is forgotten.

In my experience, guidance that focuses on shallow trappings like structure, organization, and naming tends to be the result of shallow thinking, and worse, tends to encourage shallow thinking. It espouses ideals that are theoretically beneficial. Often it will describe what it protects you from, but usually it offers scant evidence of how it helps you solve problems. And it might just make you feel like you're doing it wrong if you feel constrained by it.

Instead, prefer guidance that shows you how to think, rather than how to constrain your options. Prefer guidance that walks you through how to solve a problem, and shows you why thinking about a particular set of things will help lead you to a clean solution. And hold dear guidance that openly admits when it doesn't apply, and tells you why.

To say this all more elegantly:
Don't seek to follow in the footsteps of the wise. Instead, seek what they sought.
-- Matsuo Basho 

Monday, June 24, 2013

Context Loss via Implicit Information

A post model is not a view model. A form view model is a subset of the post model. Even when they are structurally the same, the roles are different, and it's highly likely that changes will exert a pressure to evolve the details of those roles separately.

Here's the problem: If, at the point that your view model and your post model happen to be structurally identical, you decide that it's "overkill" or "duplication" or "unnecessary complexity", and just use a single model, then you make those two roles completely implicit, and model a scenario that is only circumstantially true. A less experienced developer will come along, and absent the experience to know that there are two roles in one place, evolve the single object in-place rather than split the roles out. They don't know any better, and you have hidden from them the information that would indicate to them the better way forward.
In short, a data object is incomplete without its context. Its responsibility, in terms of the Single Responsibility Principle, is naturally coupled to its context. This isn't a bad thing, as long as you recognize and respect that a data object has no standalone identity.

So how do you respect this symbiotic state of affairs? In a strongly-typed language it's very simple: by giving it its own, layer-specific type/class. This way it will become blatantly obvious when a data object has gotten too far from home and is in danger of being misused or misinterpreted.
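A minimal sketch of what this looks like (TypeScript standing in for any strongly-typed language; the post/view-model names are invented): because TypeScript compares classes with private members nominally, two structurally similar classes can't silently stand in for one another, so a data object that wanders out of its layer is caught at compile time.

```typescript
// Two roles that happen to share the same fields today. The private role
// marker makes the classes nominally distinct to the type checker, even
// though they are structurally alike.
class BlogPostModel {
  private readonly role = "domain" as const;
  constructor(public title: string, public body: string) {}
}

class PostFormViewModel {
  private readonly role = "view" as const;
  constructor(public title: string, public body: string) {}
}

// A view-layer function that only accepts the view-layer type.
function renderForm(vm: PostFormViewModel): string {
  return `<form><input value="${vm.title}"></form>`;
}

const post = new BlogPostModel("Hello", "World");
// renderForm(post); // compile error: the domain object can't cross the layer
const vm = new PostFormViewModel(post.title, post.body);
console.assert(renderForm(vm).includes("Hello"));
```

When the roles inevitably diverge, each class evolves in place without dragging the other layer along.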

In a loosely-typed language, you respect the coupling by cloning or mapping a data object as it moves between layers. Since there's no type system to stop you, it's extremely easy to just pass the same thing around until a clear requirement of change appears. But the very fact of the data object's implicit role means that the developer will have precious few clues to this. By always cloning or mapping into a new object the potential damage is mitigated, and the developer gets a clue that something is being protected from external modification. Hopefully this distinction of identity encourages the developer to make even small changes to structure and naming that ensure that the object is always expressive and clear of purpose relating to the context of its usage.
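The boundary mapping described above might look roughly like this (sketched in TypeScript with plain object shapes; the record/view names are invented): each layer gets a fresh copy, so a mutation in one layer can't silently leak into another, and the map function becomes the one obvious place to absorb future divergence.

```typescript
// The domain shape and the view shape coincide today, but the explicit
// boundary keeps their roles and identities separate.
interface PostRecord { title: string; body: string; authorId: number; }
interface PostView   { title: string; body: string; }

// Explicit map at the layer boundary: a brand-new object carrying only the
// fields the view actually needs.
function toView(record: PostRecord): PostView {
  return { title: record.title, body: record.body };
}

const record: PostRecord = { title: "Hi", body: "There", authorId: 7 };
const view = toView(record);
view.title = "Edited in the UI";       // mutate the view-layer copy...
console.assert(record.title === "Hi"); // ...the domain object is untouched
```

The few lines of mapping code are exactly the "clue that something is being protected" that a shared mutable object would never provide.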

Recognizing this is a nuanced design skill, and it's one that people often resist in order to avoid adding more types to their programs. But it doesn't take long to reach a level of complexity where the benefits of this care and attention to expressiveness and signalling become clear.

Tuesday, January 17, 2012

You'll POCO Your Eye Out!


POCO
noun
1. A Plain Old CLR Object: one that neither derives from a base class nor implements an interface defined as an integration point for a framework.

Victory

Infrastructure frameworks often brag these days about being POCO-compatible. This is in some ways a victory, because it used to be quite the opposite. In order to use an ORM, or an MVC framework, or really anything involving a "model", you needed to inherit some framework base class before it would recognize and respect the role of these objects in your application. But we're past that now. Today you can use POCOs even with frameworks that come directly from Microsoft, like the Entity Framework.
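The contrast, sketched in TypeScript rather than C# (the FrameworkEntity base class and both model classes are invented to stand in for any framework's required base type):

```typescript
// The old style: the model can't exist without the framework's base class,
// which wires in change-tracking and other machinery via inheritance.
class FrameworkEntity {
  markDirty(): void { /* framework change-tracking hook */ }
}
class TrackedCustomer extends FrameworkEntity {
  constructor(public name: string) { super(); }
}

// The POCO style: a plain object with no framework lineage. The framework
// discovers and tracks it by inspection instead of inheritance.
class Customer {
  constructor(public name: string) {}
}

const tracked = new TrackedCustomer("Grace");
tracked.markDirty();
console.assert(tracked instanceof FrameworkEntity);

const plain = new Customer("Ada");
console.assert(!(plain instanceof FrameworkEntity)); // no framework lineage
```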

"Convention over configuration" is a similar principle of framework and infrastructure design that has been lauded for a few years now, and is finally hitting more mainstream shops. It trades on the ability to reduce the number of design decision points, allowing you to write less code, and do it faster. In other words, why take all the time and effort to deal with a complicated configuration mechanism capable of expressing myriad options, and then a complicated engine capable of processing all those options, when any one project is only going to use a slice of it all? With convention-based frameworks, just follow some simple rules, and let the framework handle all the details.
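A toy illustration of the trade (in TypeScript; the naming rule and both lookup mechanisms are invented for illustration): one naming convention replaces an ever-growing configuration table.

```typescript
// Explicit configuration: every pairing is spelled out, one entry per model,
// and every new model means another line of configuration.
const configuredViews: Record<string, string> = {
  CustomerModel: "CustomerView",
  OrderModel: "OrderView",
};

// Convention over configuration: one rule replaces the whole table.
// Invented rule: a model named "FooModel" binds to a view named "FooView".
function viewForByConvention(modelName: string): string {
  return modelName.replace(/Model$/, "View");
}

// The convention agrees with the old config for existing models...
console.assert(viewForByConvention("CustomerModel") === configuredViews["CustomerModel"]);
// ...and handles new ones with zero added configuration.
console.assert(viewForByConvention("InvoiceModel") === "InvoiceView");
```

The catch, as the rest of this post argues, is that the rule now lives nowhere in particular: nothing at the point of use tells a newcomer why "InvoiceModel" found "InvoiceView".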

Tradeoff

For a long time, I was really happy about this. But today I realized that for all the benefits of simplification and boilerplate elimination, there are some pretty serious tradeoffs being made. Yes, the benefits can be absolutely huge. The gains in consistency, speed, reliability, and flexibility can't be overstated. It saves you from thinking and worrying about a lot of things that are fairly arbitrary in the context of the larger business problem.

Yet for all my appreciation, some shine is starting to come off this penny for one big reason I like to call "context-smearing". Both POCOs and conventions gain their structural homogeneity and rapid development by diluting object responsibilities, and trading away discoverability and expressive design. They take contextual info that ought to be localized, and smear it out between layers, across multiple or even many different locations in code. I am becoming increasingly careful about making this trade-off. I have seen the price it exacts, and I think there are definitely cases where that cost is probably not worth the gain.

Danger

So the big problem with both POCOs and conventions is a loss of context. What's so bad about that? In a word: coupling. Regardless of the absence of tangible framework attachments like base classes and interfaces, your POCOs and conventional objects are still tightly coupled to the framework. But instead of looking at the object and seeing the coupling, it is made almost completely invisible.

When you write a POCO that a framework needs to process, it needs to be simple. Anything too complicated and the framework won't understand it or will make an assumption that doesn't hold, and then the whole thing comes crashing down. Under this pressure your POCOs get dumber and dumber. Unless you're very diligent, before long they are not much more than simple property bags. And a bag of properties is no model. It's just a bunch of data.

With the behavior driven out, there's no context with which to understand the meaning of the data, the relationships, and the references. There's no indication why this property is reproduced over there, or why that container seems to exist solely to hold a single other object.  Instead, this knowledge is encoded into all the places where the objects are used. It is distributed and diffused throughout the application, so that in no one place--or even handful of places--can you look to see the true meaning of what that object's role is.
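A hedged sketch of that dumbing-down (in TypeScript; the order shapes and rules are invented): the property bag carries the data but none of the meaning, so every consumer must re-encode the rules, while a model with behavior keeps that knowledge in one place.

```typescript
// The anemic property bag: nothing but data. Why is `discount` here? What
// makes an order "overdue"? The object can't say; every consumer must know.
interface OrderData {
  total: number;
  discount: number; // fraction, e.g. 0.25 for 25% off
  dueDate: Date;
}

// The same knowledge localized as behavior: the rules travel with the data,
// so consumers don't each re-derive (and eventually diverge on) the meaning.
class Order {
  constructor(private data: OrderData) {}
  amountDue(): number {
    return this.data.total * (1 - this.data.discount);
  }
  isOverdue(asOf: Date): boolean {
    return asOf > this.data.dueDate;
  }
}

const order = new Order({ total: 100, discount: 0.25, dueDate: new Date("2012-01-01") });
console.assert(order.amountDue() === 75);
console.assert(order.isOverdue(new Date("2012-06-01")));
```

With only the bare OrderData flowing between layers, the `total * (1 - discount)` rule gets copied into every consumer, which is precisely the diffusion of context described above.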

The consequence of this is that points of consumption become fountains of boilerplate and redundancy in your code. At least it's consistent, though. So naturally you'll turn to conventions in order to scrub away some of this homogeneous noise. A good convention-based framework can look at your model, look at your usage, and determine all it needs to know to glue them together. This really streamlines things. But soon enough you've compounded the problem, because conventions actually encourage the exact same diffusion of context. Only conventions grind context down to an even finer sprinkling of dust, and then scatter it on the wind.

It should be clear enough why this is dangerous. It's very easy to slip from here into what John Cook calls "holographic code", which he describes far more elegantly than I have here.

Mitigation

In my opinion, none of this means you should shun POCOs and conventions. Heck, I'm even working on a convention-based JavaScript framework of my own called copper.js. They do provide real benefits, especially when the ability to iterate rapidly and share coding and design responsibilities is more important than hi-fi business modeling. Both are highly effective tools in the context of such priorities. And let's be honest, that's the environment that most programmers are working in.

But take the sales pitch with a grain of salt. Conventions will make some of your code simpler, but they can also easily obscure the problem and the solution--or even just the actual model--behind the patterns mandated by the mechanism. An experienced "framework X developer" can see right past that noise. But should we really need to be framework experts just to understand the business problems and solutions? Solid and inviting framework documentation can go a long way in helping people see the signal in all the noise.

Likewise, POCOs are tremendously light-weight, quick, and flexible. This commonly comes at a cost of design confusion or opacity of purpose when seen separated from their points of consumption. Establishing context is crucial in avoiding confusion and misuse. POCOs should be small and focused, not general and reusable. And they should be located as close as possible to the consumers.

Acceptance

While the problem could get very bad if you're not careful, it's rarely inevitable. And the speed you gain for the 80% case can often be invaluable. You just need to be aware of the pathologies in play, and take steps to mitigate them. If you're lucky, wise decisions on the part of the framework designers can reduce the number of compromises you have to make. But even more important--because it's actually under your control--wise decisions on the part of your development team can contextualize the anemic and opaque bits, limiting their reach.

My advice is to be aware of the problem, and tackle it head-on. Whether by careful coding or clear documentation, be diligent in providing context where the frameworks will tend to suck it away.