03.14.10

Levels of Developers

Posted in Uncategorized at 2:32 am by Kyoryu

Jeff Atwood asks why programmers can’t, well, program.  In his words:

I wrote that article in 2007, and I am stunned, but not entirely surprised, to hear that three years later “the vast majority” of so-called programmers who apply for a programming job interview are unable to write the smallest of programs.

To some extent, I think he’s being unfair, although I actually agree with him completely.  One thing I’ve noticed recently is that there seems to be a bit of a step function in developer ability.  Below this demarcation, developers seem to think of the value of their code entirely in terms of what they can get some library or another to do.  Their programming is almost entirely manipulating objects written by others.  Above this line, developers start thinking of the value of their code primarily being in what they write – and the code written by others that they have to deal with is, at best, an annoyance.

It’s not even entirely based on programmer ability.  There can be very good developers capable of doing a lot of things by that kind of manipulation.  There can also be developers who focus on their own code but are still very much in the learning stages.  It almost seems to be more about scope, or level of abstraction, than about effectiveness or experience.

Taking it a bit further, there seem to be at least four of these developmental “jumps” that I’ve recognized.  There are probably more – and the fact that I can’t see them probably means that I haven’t achieved them myself, yet.

  1. Manipulation of code created by others.  Typified by “call this library, put the results into this other library.”
  2. Algorithmic ability – primarily defines programs by the data transformations done.  Library manipulation is assumed.
  3. Structural ability – primarily defines work done by the internal structure of the program developed.  Algorithmic ability assumed.
  4. Systems design – primarily defines value by the overall design of the system, spanning multiple programs.  Structural ability assumed.

So the problem with things like the FizzBuzz test, and the number of developers that can’t hack it, is that FizzBuzz is a “level 2 developer” problem.  Developers at the first level, while they might be great at a huge number of tasks, simply don’t have the tools to solve it, and don’t usually think it’s worth solving.  They also tend to think that problems like “find the least common multiple” are obscure math problems, and don’t get that it’s a test of algorithmic thinking.
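For reference, FizzBuzz itself is tiny once you sit down and write the loop yourself.  A minimal sketch in C# (one of many possible solutions):

    using System;

    class FizzBuzz
    {
        static void Main()
        {
            // Print 1..100, substituting Fizz, Buzz, or FizzBuzz for multiples of 3, 5, or both.
            for (int i = 1; i <= 100; i++)
            {
                if (i % 15 == 0)      Console.WriteLine("FizzBuzz");
                else if (i % 3 == 0)  Console.WriteLine("Fizz");
                else if (i % 5 == 0)  Console.WriteLine("Buzz");
                else                  Console.WriteLine(i);
            }
        }
    }

The specific code doesn’t matter; the point is that it requires writing a loop and a little conditional logic yourself, rather than gluing libraries together.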

A similar problem exists with design patterns.  These operate at the third level or higher.  A developer at the first or second level really has no use for them, or even really the capability to understand them.  Telling a developer at one of those levels that design patterns are “good” is just asking for trouble – without the required understanding, they’ll try to apply patterns to the wrong problems.

Again, this is not really a value judgement.  There’s a lot of value in experienced devs who are at that first level – since they value libraries written by others, they’ll tend to know the ins and outs of them very well, will know which ones are useful, and will be able to pull things together to make something blindingly fast.  If you want someone to develop a pretty typical CRUD app, that’s probably what you need.  Guys at the “higher” levels probably aren’t what you want – since they don’t care as much about other people’s code, they won’t be as up on the intricacies or latest libraries.  But, once you get off into areas that aren’t well-trodden, that’s where you’ll start to need them.

So are things like FizzBuzz good tests for developers?  Depends.  If you really need a guy to tie some libraries together in a useful way, well, probably not.  You’re applying a second-level test when you need a first-level developer – you’re testing the wrong thing.  But at least it explains how so many programmers “can’t program.”

02.14.10

All programs should be compilers

Posted in General development at 9:26 am by Kyoryu

Been reading Steve Yegge’s stuff.  He’s smart.  You want him working for you.

This post reminded me of Rob’s Grand Theory of Good Object Oriented Programming (I’m the Rob in question, BTW).

If you’re asking what the hell compilers have to do with object oriented programming, read on.

Fundamentally, what a compiler does is transform data.  Each phase of the compiler makes a different transformation of the data.  Most steps will spit out the data in an entirely different form, which is neat because it means that any given phase generally doesn’t have to know about anything except for the data format it gets, and the format it spits out.  Phases are usually one-way.

A simple compiler might take in a text file, and transform it into a token stream (lexing phase).  Then, it will take the token stream and turn that into an intermediate representation such as an AST (parsing phase).  After that things get fun.  Usually you’ll then mess with the AST a few times, simplifying it, and then transforming it for perf reasons.  This may or may not be the same tree format.  Once you’re done with that, you’ll start the process of transforming it into something resembling the native format of the machine you’re actually compiling it to.

And so on, and so forth.

What we have here is a series of things that take data in, mangle it, and spit it out the other side, often in a different format.

So, again, what the hell does this have to do with object-oriented programming?

Well, it turns out you can model just about any damn thing you want using this programming model.  For instance, say you want to write a calculator program.  You might be tempted to write that by checking to see if a button was pressed, if so adding a value to an expression tree, and then calling calculate on the expression tree to get a value, and then setting the text on your output.  This could all be done as a single large method, but you’d probably realistically break it into multiple functions if not objects (though the actual code path would likely be roughly equivalent).

You could do that.  Or, you could treat it like a compiler.

In our calculator-compiler, we’d start with the input being the coordinate of a mouse click.  We would transform that into a symbol, much like a token in a lexer.

This token would then be passed into an expression builder (much like a parser).  As a result of this, our parser would then output a current value, which could either be a number, an error condition, or perhaps some other kind of message.

The next piece in line would take the output message and transform it into some kind of data structure representing formatted text (possibly including colors and whatnot).

And the final piece would take that formatted data structure and turn it into a bitmap to be displayed in our output area.
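Sketched out in C#, the chain might look something like this (the class names are mine, and the interesting internals are stubbed out):

    using System;

    // Each piece accepts one data format, transforms it, and hands the result to
    // the next piece.  No piece knows anything beyond that.

    class Display
    {
        // Formatted text in, pixels out (the console stands in for a bitmap here).
        public void Accept(string formatted) { Console.WriteLine(formatted); }
    }

    class ResultFormatter
    {
        private readonly Display _next;
        public ResultFormatter(Display next) { _next = next; }

        // Current value (or error) in, formatted text out.
        public void Accept(string value) { _next.Accept("[ " + value + " ]"); }
    }

    class ExpressionBuilder
    {
        private readonly ResultFormatter _next;
        private string _expression = "";             // the only state, local to this class
        public ExpressionBuilder(ResultFormatter next) { _next = next; }

        // Token in, current value out.
        public void Accept(char token)
        {
            _expression += token;
            _next.Accept(Evaluate(_expression));
        }

        private string Evaluate(string expr) { return expr; }   // stand-in for real evaluation
    }

    class ButtonLexer
    {
        private readonly ExpressionBuilder _next;
        public ButtonLexer(ExpressionBuilder next) { _next = next; }

        // Mouse coordinates in, token out.
        public void OnClick(int x, int y) { _next.Accept(HitTest(x, y)); }

        private char HitTest(int x, int y) { return '7'; }      // stand-in for real hit testing
    }

    class Program
    {
        static void Main()
        {
            var chain = new ButtonLexer(new ExpressionBuilder(new ResultFormatter(new Display())));
            chain.OnClick(10, 20);   // a simulated click flows through the whole pipeline
        }
    }

Nothing in the middle knows where its input came from, or what happens after it hands its output off.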

Now, as the sketch suggests, I’d write the calculator program in such a way that each of the pieces I just described was a separate class (at least… maybe more than one!)  That’s a lot of classes.  Why would I do this?

Well, it turns out that doing things like that has a lot of benefits.  First, it means that any of your code, except for the parts at either the very beginning of the chain (where you read the mouse input) or the very end (where you actually display the bitmap) can be tested in isolation.   That’s pretty cool.

Secondly, it means that except for data, no part of the system really knows anything about any other part of the system.  That makes changing things much, much easier.

Third, the system becomes almost entirely stateless.  And what state exists is completely localized to a single class.  State is only used to modify the outputs of a given class, not to be queried by other classes.

Fourth, locking becomes really easy.  A given class locks when it is modifying its local, internal state, and releases the lock before it calls the next object.

Fifth, multithreading becomes easy.  If this is all modeled as objects making void calls on each other, then you can make calls to objects on pretty much arbitrary threads.  With no return value, and no real assumption of state modification, it gets easy.

Sixth, your error scope becomes smaller.  Since any given class is only responsible for a specific, defined data transformation, it’s really irrelevant what the next guy in the chain does.  It’s not really your responsibility.

Okay, but does this work for interactive apps?  Or apps with complex logic, or multiple inputs or outputs?

Yep!  It works fantastic!  It just means that your chain isn’t linear, that you have some areas where multiple objects talk to one, or where one object outputs data to many.  Hell, it even works in cases where objects can cause chains that result in calls back to themselves.

It doesn’t preclude more “typical” OO programming, either.  In fact, I often use more typical techniques when creating and describing data structures, rather than program flow.

One of the other interesting things that happens in this style of programming is that you often realize that what you’re doing is very analogous to creating an AST of your own.  Which means that you can, conceivably, create a generic form of whatever it is you have that wires your objects together that works on some input.  Some input that might be a tree-like structure, which could be generated from a token stream…

See where this is going?

There are other advantages, too.  “Normal” OO programming often runs into a number of common problems.  First, you’ve got the efficiency problem.  Calling “User.Save()” will perhaps make a call to a database, and looping over 20 Users will result in 20 calls.  That sucks.  Looking at the problem from this approach, you’d transform that array of 20 pieces of data into a SQL query, which would then be passed to something which could hand it to the SQL server itself.
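Something like this, as a sketch (the names are mine, and a real version would use parameterized SQL rather than string concatenation):

    using System.Collections.Generic;
    using System.Linq;

    class UserBatchWriter
    {
        // Turn a whole batch of (id, name) pairs into a single INSERT statement,
        // instead of one round trip per user.
        public string ToSql(IEnumerable<KeyValuePair<int, string>> users)
        {
            var rows = users.Select(u => string.Format("({0}, '{1}')", u.Key, u.Value));
            return "INSERT INTO Users (Id, Name) VALUES " + string.Join(", ", rows.ToArray()) + ";";
        }
    }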

Another common problem in typical OO systems is increased complexity due to either over-generalization, or overuse of inheritance.  This problem typically manifests itself as code that jumps across multiple different classes through a maze of calls that is impossible to trace.  With the ability to draw strong demarcations of responsibility at object boundaries, the code inside a given object tends to be very specific.  Since a single object isn’t trying to manage the entire data flow of a single piece of data, you end up avoiding a lot of coupling problems.

By the way, I don’t think I’m really alone in this.  If you look at the GoF book from this perspective, a whole lot of the patterns make more sense than if you look at them from a more typical viewpoint.  Also, what I’ve just described is, in many ways, the Actor model that is starting to come into vogue.

It also works exceptionally well with test-driven development, given that 95% of your code ends up stateless, and not interacting with the system at all.

02.08.10

Why Use Cases Rock

Posted in Uncategorized at 4:02 am by Kyoryu

Typical agile-style use cases rock.  Really.

There’s a pretty common pattern for them, so common that it’s a part of the whole Behavior-Driven Development idea.  It’s just:  “As <role>, I want <feature> so that <benefit>.”

A lot of people seem to think that this is overly simplistic.  I respectfully disagree.  What this simple format does is tell us the most important things about anything we do, in a way that pages of details about the implementation don’t.

As <role>:  Tells me who the customer is.  And who the customer is tells me a lot about how I need to expose functionality.

I want <feature>:  Describes what the customer wants.  This is typically where we spend the most effort.

so that <benefit>:  The business value of what we’re doing – why it’s important.

Simplistic?  Maybe, but it does a much better job of reminding us of who our customers are, and what the business value that we’re trying to produce is, than any other format I’ve seen.

And, frankly, forgetting who our customers are, and not focusing on business value seems to be a far more frequent problem than not spec-ing features sufficiently up front.

07.16.09

Ranking vs. Prioritizing

Posted in Uncategorized at 1:44 pm by Kyoryu

Just finished writing up a list of some ‘stuff’ for work (what it was is totally unimportant).  Part of this list included prioritization of the items, and for a first pass I went for three priorities, P1, P2, P3.

After writing the list, I noticed that, as typical for prioritization systems, I ended up with a bunch of stuff in P1, some stuff in P2, and almost nothing in P3.  Since I’ve seen this anti-pattern before, I decided that I would split them up into groups of equal size.

Boy, was that hard.  Even without complete stack-ranking, coming up with equally-sized groups forced me to think about which of the items were important, and why, in a way that just arbitrarily assigning priorities did not.

After doing this, I’ve become even more convinced that prioritizing is a waste of time, and that stack ranking in some form is, really, what is needed.  Prioritization alone does not force you to really think about your requirements and what is really important, and seems to inevitably end up with a “top priority is everything we think we’ll do, second priority is everything we’ll do if we have time, and third priority won’t get done” scheme.

I don’t know if total stack ranking of every possible feature or work item is necessary, but I think that, at the minimum, you need to have small groups of equal size (probably no more than 5-10 items), and those need to be ranked.  This gives some flexibility in terms of not overly worrying about whether items 1 and 2 should swap (since they’re both likely to get done), but still acts as a forcing function in that you’re limited to ‘n’ top-priority items.

06.19.09

Robustness

Posted in General development at 12:39 am by Kyoryu

 

Robustness is one of those things that we can chase forever.  Many developers think that “robustness” means never crashing.  A more experienced developer will realize that there are many, many things worse than crashing.  Continuing to run while in an invalid state is a much worse option, as it opens up the possibility of corrupted data – a far, far worse problem than a simple crash.

Even past that, we have to look at error conditions that can occur, what compensating actions we can take, and what the impact to the user is.

There seem to be a few general levels of robustness in applications.

  1. In cases where no system failure occurs, and all input data is correct, the system should work.  This is the basic level of correctness.  Now, the catch here is knowing what the system should do for any set of valid input…
  2. User input should be appropriately validated and sanitized to prevent failure.  Again, sometimes you can’t just nicely recover, and the only thing you can do is throw an exception or other error code.  That’s fine (see the sketch after this list).
  3. The program should continue to work in case of reasonable system failures – a file being open unexpectedly, a remote system not being available.
  4. The program recovers in the case of extreme failures – out of memory, full hard drive, hard drive unexpectedly removed.  In many cases, catching these failures may not be worth the effort.  It is unlikely that you can do any reasonable recovery, so the best you can do is a minimal recovery that tries not to corrupt any data, and then get out.  If you don’t know that you can even do minimal recovery, just fail and hope for the best.
  5. In the case of users undermining the system by deleting files you require, I don’t know that it’s even worth bothering.  If something you require is gone, you’re broken.  Don’t even try to run, exit as quickly as you possibly can to prevent data loss in the future.  This scenario is no different than somebody deliberately deleting files from the Windows directory.
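The sketch I promised for the second level is about as simple as it sounds: validate up front and fail loudly, rather than carry a bad value forward into an invalid state (TransferService is a made-up name):

    using System;

    class TransferService
    {
        // Reject bad input immediately; continuing with it is the worse outcome.
        public void Transfer(decimal amount)
        {
            if (amount <= 0)
                throw new ArgumentOutOfRangeException("amount", "Transfer amount must be positive.");

            // ... safe to proceed; the input is known to be sane ...
        }
    }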

And, that’s my view.  I’m sure some will disagree, but that’s fine.  Attempting to recover from an unrecoverable scenario – one that is unlikely to ever happen in reality and that, if it does happen, will almost certainly be accompanied by other failures – has little value.  It is likely that the time spent could be spent doing other things that will have a higher value to your consumers.

06.12.09

How difficult is it to change your code?

Posted in Uncategorized at 2:43 pm by Kyoryu

Change happens.  We’re not perfect.  We don’t know everything.  We invariably learn something about the project we’re doing while we’re doing it.

So, code that can easily be changed is better than code that can’t be changed.  But how do we know how easy it is to change code?

Difficulty of changing code is best measured at the class/interface level.

If you have a lot of classes that are each easily changed, you will be able to change your code easily.  If you have a few classes that are each difficult to change, it will be difficult to change your code.  Even if the total work done is the same.

The primary measure of difficulty of changing code is the number of consumers it has.

The more things that know about your code, the harder it is to change.  This is a reason why the universally-loved concept of “the one place that does all X” almost always fails.

It is easier to change an interface that has one consumer than one with three consumers.  It is easier to change an interface with three consumers than one with twenty consumers.  And if you have one hundred consumers, forget about it.

By consumers, I do not necessarily mean applications or individuals; rather, I mean the classes that refer to a given class.

It’s easier to change internal code than public-facing code.

This is a restatement of the first point.  Published, public-facing code has an infinite number of consumers, making it nearly impossible to change.

The more implementation details you expose, the harder it is to change your code.

Public-facing code should, as much as possible, not leak implementation details.  It should reflect the user-facing concepts that are being exposed, and not the implementation details of those concepts.

Not exposing raw types is a good way to do this as well.  If you have a user ID that’s an int right now, it may be somewhat painful to change it to a long later.  However, if you wrap the int in a UserID class, changing it to use a long or even a GUID will become much, much easier.
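A sketch of that kind of wrapper (UserId and its members are just illustrative):

    // Wrap the raw int so consumers depend on the concept, not the representation.
    public sealed class UserId
    {
        private readonly int _value;    // swap this for a long or a Guid later; the change stays contained here

        public UserId(int value) { _value = value; }

        public override bool Equals(object obj)
        {
            var other = obj as UserId;
            return other != null && other._value == _value;
        }

        public override int GetHashCode() { return _value.GetHashCode(); }
        public override string ToString() { return _value.ToString(); }
    }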

The more scenarios you support, the harder it is to change.

If you support a large number of scenarios, it is almost inevitable that assumptions needed for one will spill over into others.

The more well-defined your code is, the easier it is to change.

If your code has well-defined inputs and outputs, and doesn’t have side effects, it is much easier to change.  While the public entry points may remain difficult to change if they have many consumers, any internal details can be changed arbitrarily, and the correctness of the final code can be verified.

On the other hand, if the behavior of the code is not well-defined, then changing it can become extremely difficult, as consumers may be relying upon existing behavior that is either in error, undocumented (and so likely to change if you touch the implementation), or simply a side effect of the “real” work being done.

06.09.09

Checked Exceptions

Posted in Uncategorized at 11:57 pm by Kyoryu

Interview with Anders Hejlsberg

This seems to be somewhat of a controversial subject.

On the one hand, we have Java, which forces exceptions to be caught and potentially rethrown.  This is, certainly, something of a pain.

On the other hand, C# doesn’t require anything, and any method can potentially throw any kind of exception.

I can see the points on both sides.  Nothing is  uglier than a bunch of arbitrary try/catch statements in code that do nothing more than rethrow exceptions.  And just blindly swallowing exceptions is even worse.

On the other hand, not really knowing what a method might throw in C# can be really, really annoying at times.

Anders, in the interview:

“Let’s start with versioning, because the issues are pretty easy to see there. Let’s say I create a method foo that declares it throws exceptions A, B, and C. In version two of foo, I want to add a bunch of features, and now foo might throw exception D. It is a breaking change for me to add D to the throws clause of that method, because existing callers of that method will almost certainly not handle that exception.”

Well, that’s certainly reasonable.  But I have to wonder if it’s the right answer.  If you add a bunch of functionality to a class, is it perhaps better to make some new ReallySpiffyFoo class that contains the new functionality, and leave the existing class as it is?

Then again, I’m not a huge fan of growing classes over time – in most cases, I believe you’re better off leaving a well-defined class as-is except for bug fixes, and putting new functionality into a new class (which might internally use the old one).

He continues:

“Now, each time you walk up the ladder of aggregation, you have this exponential hierarchy below you of exceptions you have to deal with. You end up having to declare 40 exceptions that you might throw. And once you aggregate that with another subsystem you’ve got 80 exceptions in your throws clause. It just balloons out of control.”

Another reasonable point.  However, I’d tend to believe that in a case like that, you’ve got a bigger design issue at play.  Why in the world would a business object throw a FileNotFoundException or the like?  At most, it should throw something like a CouldNotLoadDataException.  The fact that the data was to be loaded from a file is completely irrelevant at that level.
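Something along these lines, say (CustomerRepository is a made-up name; the point is only the translation at the boundary):

    using System;
    using System.IO;

    public class CouldNotLoadDataException : Exception
    {
        public CouldNotLoadDataException(string message, Exception inner)
            : base(message, inner) { }
    }

    public class CustomerRepository
    {
        public string LoadCustomerData(string path)
        {
            try
            {
                return File.ReadAllText(path);
            }
            catch (FileNotFoundException ex)
            {
                // Callers shouldn't need to know the data happened to live in a file.
                throw new CouldNotLoadDataException("Customer data could not be loaded.", ex);
            }
        }
    }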

I also suspect that Anders is looking at this mostly from the viewpoint of a language and framework developer.  As a framework developer, he expects code he writes to be called by other people, and they can certainly look up what exceptions are being thrown.  That’s reasonable.

However, if I’m using an interface as an extensibility point, it’s a slightly different story.  Now I’m importing someone else’s code into my application, and I have no idea of what it might throw when I call it.  If that doesn’t sound scary, I don’t know what would.  At this point my options are either let my app crash when I make any arbitrary call, or catch Exception directly.  Neither of those are, in my mind, really good solutions.

What I’d like to see is defined exceptions, but not necessarily checked exceptions.  I’d like to know what exceptions a method may throw, but I don’t want to necessarily be forced to catch them.  In my mind, the exceptions you throw are effectively part of your API, especially when looked at from role-based interfaces for extensibility rather than header-style interfaces.

If I define an operation in an interface, I’m basically saying that I expect to be able to make this call, with certain parameters, and get a certain type of result back.  As part of that, saying that I expect to throw (or will throw) certain exceptions is part of the definition of my API.
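In C# today, about the closest thing to that is documentation.  A sketch, with made-up names, of what I mean by the exceptions being part of an interface’s definition (the <exception> doc tag states the contract without forcing callers to catch anything):

    using System;

    public class PluginLoadException : Exception
    {
        public PluginLoadException(string message) : base(message) { }
    }

    public interface IPluginLoader
    {
        /// <summary>Loads the named plugin.</summary>
        /// <exception cref="PluginLoadException">The plugin could not be loaded.</exception>
        /// <exception cref="ArgumentNullException"><paramref name="name"/> is null.</exception>
        object Load(string name);
    }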

What I don’t see much value in is checked exceptions as in Java.  To me, there is absolutely no value in putting in boilerplate code to just rethrow exceptions that I’ve caught just to satisfy a compiler restriction.  But, knowing what exceptions can be thrown is, to me, extremely valuable.

05.14.09

“Big Design” Up Front vs. Big “Design Up Front”

Posted in General development at 1:14 pm by Kyoryu

Design is always a touchy subject.  There are those who believe everything should be designed out beforehand, and those who believe you should design as you go.

I’m pretty firmly in the latter camp, as I’ve never seen the former plan actually work.  But, there are a few caveats to that.

I don’t believe in the idea of designing every class, method, and interface before you start coding.  Many of those details will become obvious as part of coding, or improvements will be found, and having to add committees and approval processes to making changes (especially if they’re internal only) seems like a really bad idea.  When I talk about big design up front, this is generally what I’m referring to – Big “Design Up Front.”

On the other hand, you need to do some level of design up front.  I think the XP folks call this the system metaphor.  You need to know the big pieces in your design, and what the general flow of the data is.  If it’s a distributed application, who connects to who?

Specific technologies don’t need to be a part of this conversation.  If you know that process A will send data to process B when the data is ready, then how that takes place is mostly an implementation issue.  The important decisions are things like whether A connects to B or vice versa (especially if distributed), and whether A pushes data or B pulls it via polling.  Even in a single process, where are your component boundaries, and are they really boundaries?  What’s your threading model?

These are the kinds of decisions that you have to make early, as they shape the system as a whole.  These are the big pieces of design that need to be hammered out.  I’d call this “Big Design” Up Front, and I’m firmly in favor of it.

04.06.09

Letting go

Posted in General development at 10:12 pm by Kyoryu

One of the hardest things in development is learning to let go.  It’s something most developers fall victim to – you get some idea for a system that will simply fix ALL of the problems, walk the car, and wash the dog!

And then you find some use case that your system doesn’t quite cover.  So you fudge around the use case, or scope it out. And so on and so forth.  And you end up with some nasty piece of code that barely works, is horribly mangled to the point of unmaintainability, and that nobody wants to deal with, ever.

The problem here is letting go.  As developers, we are in the job of creating solutions for problems.  Any piece of code is a solution to a problem.  And most developers are pretty smart, and hate admitting that they’re wrong.

But sometimes we are wrong.  And when our use cases (the problem) start conflicting with our code (the solution), it should be the code that loses.  We should tailor our solutions to the problems we are presented, not the other way around.

When we’re wrong, we have to let go of our wrong solution, and learn to do it quickly and easily and without ego.  And that can be very hard.

01.29.09

Can you know this?

Posted in Uncategorized at 2:10 pm by Kyoryu

One of the questions I like to ask when designing software is simple – “can I know this?”

For instance, when dealing with data across a network, you might decide that you need to know that your local copy of the data is up-to-date before you reduce a particular value by 5.  So the question, in this case, is “can I know that my data is up to date?”

And the answer for that is, generally, no.  To do so requires implementing some sort of locking mechanism, and ask the database folks how easy that is.  Hint:  It’s easy in the trivial case, but quickly becomes difficult.  Another hint:  Two words – ‘deadlock’ and ‘livelock.’

As developers, we tend to believe that we can know everything about a system.  We tend to believe that every problem is, essentially, solvable.  We hate admitting that we can’t get the answer to something.

But sometimes, we just can’t.  Sometimes, the answer to something is dependent on so many other factors that are outside of our control, and that we can’t measure, that there is no way to answer the question with 100% accuracy.

When faced with problems like this, I try to follow up the initial question of “can we know this” with another couple of questions:  “What do we know,” “what don’t we know,” “who knows this,” and “what is it we really want to know or do?”  This will often suggest a better solution to the problem than one which requires unknowable information.

For instance, in our initial example, we don’t know if our data is up to date, because we don’t know if someone else has updated the data since our last refresh.  And we can’t know that, because it will take an amount of time for any updates to reach us – the best we can ever do is say that we know what the data looked like at some point in the past.

But, what we do know is different.  We do know that we want to decrease the value by 5.  And we know that in most cases, there’s an authoritative data source somewhere.  This suggests a solution – instead of us modifying the data locally, send a request to the source of the data not to set the value to a specific amount (what we believe the current value is, minus 5), but rather to decrease the amount by 5.  Because the data source should always have the current value, it will know what to do to decrease the current value by 5.
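In code, the difference is just which operation you expose (IInventoryService and the method names here are made up):

    public interface IInventoryService
    {
        // Fragile: requires knowing the current value at the moment of the call.
        // void SetQuantity(string sku, int newQuantity);

        // Robust: the authoritative source applies the change to whatever the current value is.
        void DecreaseQuantity(string sku, int amount);
    }

    class Caller
    {
        static void Adjust(IInventoryService inventory)
        {
            inventory.DecreaseQuantity("ABC-123", 5);   // no stale local read involved
        }
    }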

If we don’t ask these questions, we can easily start down the road of trying to know the unknowable – being so set on having our local machine get the latest value and set the new value to that minus 5 that we do all sorts of crazy research into synchronization and locking mechanisms.  Generally, this results in madness.  Every solution that covers some percentage of cases leaves others broken, and you can end up chasing your tail trying to patch the corner cases, or dealing with issues that are only there in the first place because of how you’re dealing with the problem – for instance, the locking issues I discussed earlier.
