Sunday, December 23, 2012

Basement Hackers and Developer Elitism

I have read a lot of blogs complaining about the state of affairs among developers. Jeff Atwood asks, "Why Can't Programmers.. Program?" Jason Gorman goes so far as to say:
Consider that not all developers are equal, and some developers achieve more than others. In reality, 80% of the working code in operation today can probably be attributed to a small proportion of us. The rest just get in the way. If anything, if we thinned down the herd to just the stronger programmers, more might get done.
Certainly, skill level varies among developers. But to make ridiculous claims like this is nothing more than a case of developer elitism and almost sounds like some sort of ethnic cleansing campaign.

What's really going on is that, thanks to affordable PCs and the internet, it's now possible for someone who has never set foot in a university to learn to code from online tutorials. The barrier to entry is lower than ever before. This is not necessarily a bad thing, but I can see why it would make some of the older folks bitter.

The same thing happened with photography. It used to be that in order to become a photographer, you needed access to a darkroom. You needed to know about the different chemicals. You needed to work for some time as an apprentice.

When digital photography and photo editing came along, the barrier to entry became significantly lower. And, of course, some seasoned professionals are now bitter that legions are becoming photographers without ever needing to enter a darkroom.

A traditional education in computer science is extremely helpful; a foundation in algorithms and data structures matters. But plenty of successful developers obtained that education through other means.

Instead of pointing fingers at newcomers and attempting to "thin down the herd" as Jason puts it, we should welcome to the fold those who truly have a desire to become great programmers.

Now, were I in a position to hire developers, I would definitely test their ability to code. That is one case where it's appropriate to evaluate skills and make a judgment call. But blanket statements like those above are just plain ignorant.

Monday, December 3, 2012

Code Ownership

In economics, the Tragedy of the Commons tells of a proverbial pasture that is common to several herders. The observation goes as follows: "...it is in each herder's interest to put the next (and succeeding) cows he acquires onto the land, even if the quality of the common is damaged for all as a result, through overgrazing. The herder receives all of the benefits from an additional cow, while the damage to the common is shared by the entire group. If all herders make this individually rational economic decision, the common will be depleted or even destroyed, to the detriment of all."

I often wonder how this principle applies to a software team. Imagine you're adding a feature that requires new methods on an existing class that is already quite large. The simplest thing to do is to just add the methods and move on. It's a tempting choice because it yields immediate results, and the costs won't come due until later, when they'll be shared across the whole team rather than borne by you alone. It takes a disciplined programmer to take the extra step of pulling the methods out into a new class in hopes of reducing the team's future maintenance burden.
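
To make that concrete, here's a minimal sketch of the disciplined option; the Invoice and InvoiceReport names are invented for illustration:

class Invoice(object):
    def __init__(self, items):
        self.items = items  # list of (name, price) tuples

    def total(self):
        return sum(price for _name, price in self.items)


class InvoiceReport(object):
    """The new behavior gets its own small class instead of growing Invoice."""
    def __init__(self, invoice):
        self.invoice = invoice

    def summary(self):
        return "%d items, total %.2f" % (
            len(self.invoice.items), self.invoice.total())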

How do we solve this dilemma? In the case of the pasture, the economist would say to give a private pasture to each herder that he alone is responsible for. Can this same solution be applied to our software teams?

I think a certain sense of code ownership is beneficial. This allows a developer to specialize in a certain part of the system. She will be more likely to keep the code clean if she knows she'll be working in it again in the future. She'll be familiar with the code whenever a feature needs to be added there. The problem with this, though, is that developers typically have big egos. If you are given "ownership" over a section of code, you will be less open to other developers making improvements to your algorithms or design. It discourages team collaboration and dissemination of knowledge.

At the other extreme, where no one owns any code in particular, you may get exposure to more parts of the system, but you'll likely have no personal investment in any specific section of it.

I think the solution is some combination of the two. When working on a particular feature or bug fix, focus your efforts on getting to know that area of code. Focus your clean up efforts in that area, knowing that you (or a team member) are likely to encounter the code again in the future. When you are finished, you can move on to another section of code and repeat the process. Over time, each member of the team will ideally have some sense of ownership in each module of the system.

Here are some practices that I think provide the appropriate balance of ownership:

Small Teams

Ideally, teams should consist of three or four developers working on a particular, well-defined feature. Each developer then has a significant investment in the feature, sharing at least a quarter of the responsibility for the quality of the code. And because the team works in the same section of code for the duration of the feature, there is a greater incentive to keep it clean.

Code Reviews

Code reviews, whether done continually through pairing or just before check-in, serve as an extra opportunity to consider the impact of your changes on the code base, including how they will affect future maintenance by the team as a whole.

Ownership of Bugs

Where reasonable, any bugs that come up during development ought to be addressed by the developer who introduced them. Attitude is important here: this is not an exercise in assigning blame or public humiliation. Rather, that developer is usually the most familiar with the relevant section of code, and fixing your own bugs builds ownership of the quality of the code you produce.

The key here is that successful systems are written by developers who take personal responsibility for their code, yet at the same time realize they are part of a team. With the understanding that everyone likes to work in a clean code base, the team, and each individual, must commit to practices that produce quality, maintainable code.

Monday, October 29, 2012

Don't Parse with RegEx

At some point, I think all developers have tried to use regular expressions to parse input. It may work at first, but it quickly becomes unwieldy for all but the most trivial of inputs.

Engineers learned early on that it drastically simplifies a compiler to split it into separate phases: first find the individual tokens, then use a grammar to check that the tokens are arranged correctly, then assign semantic meaning to the statements, and so forth.

Regular expressions are great for getting the individual tokens; this is the scanning, or lexing, phase. With the tokens in hand, we're ready to do the actual parsing, which means specifying a formal grammar, typically in BNF. Grammars handle nesting and other constructs that regular expressions can't. For example, matching up arbitrarily nested pairs of parentheses is impossible for a regular expression (the language isn't regular), but trivial to specify in a grammar, as the sketch below shows.
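
Here's a minimal sketch of the two phases in Python; the token set and the tiny expression grammar are invented for illustration. A regular expression handles the lexing, and a small recursive-descent parser handles the nesting that no single regex can:

import re

# One regex finds individual tokens: numbers, parentheses, and '+'.
TOKEN_RE = re.compile(r"\s*(?:(\d+)|([()+]))")

def tokenize(text):
    """Lexing phase: split the input into a flat list of tokens."""
    tokens, pos = [], 0
    while pos < len(text):
        match = TOKEN_RE.match(text, pos)
        if not match:
            raise SyntaxError("unexpected character at position %d" % pos)
        tokens.append(match.group(1) or match.group(2))
        pos = match.end()
    return tokens

def parse_expr(tokens):
    """Parsing phase. Grammar: expr ::= NUMBER | '(' expr '+' expr ')'
    The recursion matches nested parentheses effortlessly."""
    token = tokens.pop(0)
    if token == "(":
        left = parse_expr(tokens)
        if tokens.pop(0) != "+":
            raise SyntaxError("expected '+'")
        right = parse_expr(tokens)
        if tokens.pop(0) != ")":
            raise SyntaxError("expected ')'")
        return ("+", left, right)
    return int(token)

print(parse_expr(tokenize("(1 + (2 + 3))")))  # ('+', 1, ('+', 2, 3))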

Separation of concerns is another advantage of doing it this way. If we truly follow the single responsibility principle, we have no justification for trying to lex and parse at the same time.

Regular expressions are powerful tools by themselves, but don't abuse them – especially if you're dealing with a full-fledged domain-specific language.

Sunday, September 16, 2012

State vs. Interaction Testing

In my post Unit Testing and Assumptions, I described my ideal type of test: "I don't care how [the algorithm does] it, just that the output is what I expect. These tests are short, easy to write, and they make no assumptions about the underlying code." What I didn't realize at the time is that I was simply making the case for state-based testing.

State-based testing focuses on the results of a computation, not on the specific steps of an algorithm. Instead of verifying that add(2, 2) was called on a mock, for example, we simply assert that the result is 4. This makes the tests less brittle and usually easier to write, since they require less setup. It's also better from a TDD perspective, because you don't need to know the details of the code-to-be-implemented in order to write your failing test. For better or worse, these types of tests can quickly turn into "mini-integration" tests, since they lack the isolation that mocks provide.

State-based testing stands in contrast to interaction or behavioral testing. Here the focus is, well, on the interaction of the system under test with a mock object. This can be useful when the method delegates some work to a collaborator. The collaborator object is already tested, so we just need to verify that the delegation takes place with the correct parameters. Indeed, this is essentially the only way to unit test code that makes calls to a web service or database. The downside, of course, is that the test locks down the behavior of the code. If that behavior ever changes, even if the results stay the same, we must update the test accordingly.
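
Here's a small sketch contrasting the two styles with Python's unittest; the OrderService and its payment gateway are hypothetical, and the mock module is assumed available (it ships as unittest.mock in Python 3.3+):

import unittest
from unittest import mock

class OrderService(object):
    """Hypothetical class under test."""
    def __init__(self, gateway):
        self.gateway = gateway

    def total(self, prices):
        return sum(prices)

    def checkout(self, prices):
        self.gateway.charge(self.total(prices))

class OrderServiceTests(unittest.TestCase):
    def test_total_is_state_based(self):
        # State-based: assert on the result, not on how it was computed.
        service = OrderService(gateway=None)
        self.assertEqual(6, service.total([1, 2, 3]))

    def test_checkout_is_interaction_based(self):
        # Interaction-based: verify the delegation to the collaborator,
        # the way we'd test a seam such as a payment gateway.
        gateway = mock.Mock()
        OrderService(gateway).checkout([1, 2, 3])
        gateway.charge.assert_called_once_with(6)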

I tend to be a proponent of state-based testing whenever possible. I use it when my class under test has no collaborators, or when the collaborators are lightweight. I like that this style encourages return values over side effects, and I'm usually more concerned that the results are correct than with how the code computed them.

If I have a good reason to isolate the class completely, then I use mocks to verify behavior. This is usually on the seams of the system, such as near the data access layer.

Ultimately, I end up doing whatever feels right at the time given the situation.

Tuesday, August 21, 2012

Learn a New Language

Students of linguistics have probably heard of the Sapir-Whorf Hypothesis. It essentially states that the languages you know shape your understanding of the world, influencing how you think about and approach problems.

Though often applied to natural languages, this can certainly be applied to programming languages as well. If we have had exposure to multiple programming language paradigms, such as declarative, functional, and imperative, we will be better at solving problems by choosing the right tool for the job.

Certain paradigms work better for certain classes of problems. For example, query languages such as SQL work well as declarative languages because we're more interested in what to find than in the exact steps of finding it. Declarative languages are also a great way to specify a program's view; we see this with XML layouts on Android and XAML in Windows WPF applications.

Then we have functional languages, which extol the virtues of pure functions, free of mutable state and side effects. Purity makes our programs easier to reason about and test, and it opens the door to lazy evaluation and much simpler parallelization.

Finally, imperative languages let us specify exactly how to accomplish a given task, which matters when performance is a concern or when we need finer-grained control.

Fortunately for us, many of the newer languages are multi-paradigm, allowing us to use declarative, functional, and imperative ideas in a single language.
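
As a small illustration, here is the same task, summing the squares of the even numbers, in three styles that Python supports side by side:

# The same task in three styles.
numbers = [1, 2, 3, 4, 5, 6]

# Imperative: spell out each step and mutate an accumulator.
total = 0
for n in numbers:
    if n % 2 == 0:
        total += n * n

# Functional: compose pure functions, no mutation.
total = sum(map(lambda n: n * n, filter(lambda n: n % 2 == 0, numbers)))

# Declarative-leaning: say what we want, not how to loop.
total = sum(n * n for n in numbers if n % 2 == 0)

print(total)  # 56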

So learn a new language, preferably of a paradigm that you're not as familiar with. Even if you rarely utilize the language itself, the concepts gleaned from that language will make you a better programmer and problem-solver.

Thursday, August 16, 2012

Back to the Terminal

One of the big problems of GUI programming is separating business logic from presentation. Many frameworks and patterns, such as Model-View-Controller, exist for just this purpose.

I recently came across a less common approach to this problem. From the StackOverflow Podcast #41:
Atwood: ...the classic UNIX way of developing a GUI was you start with a command-line app, which has a defined set of inputs and outputs, pipes, basically text going in and out of it.
Spolsky: A lot of command-line arguments.
Atwood: And then you put a GUI on top of that, so then you have perfect separation. You don't necessarily have a great app which is the deeper problem, but you have perfect separation, because you can test the command line independently...
Seems like an interesting solution, especially if your target audience is expert users who may prefer a command-line application anyway.

Without realizing it, this is the approach I took with my Batch Photo Editor. From the beginning, it was designed as a command-line application, which allowed me to focus on the core logic and not worry about presentation. I always have the option of adding a GUI later, in which case I'd get the business-presentation separation for free.
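
Here's a minimal sketch of the pattern; the flags and the resize_photos function are simplified stand-ins, not BatchEdit's actual code. The core logic is a plain function with defined inputs and outputs, and the command line is just a thin shell over it; a GUI would be another:

import argparse

def resize_photos(input_dir, output_dir, width):
    # Stand-in for the real business logic; returns a summary string.
    return "resized photos in %s to %dpx, wrote results to %s" % (
        input_dir, width, output_dir)

def main():
    parser = argparse.ArgumentParser(description="batch photo resizer")
    parser.add_argument("--input", required=True)
    parser.add_argument("--output", required=True)
    parser.add_argument("--resize", type=int, default=720)
    args = parser.parse_args()
    # A GUI, if we ever write one, would call resize_photos directly.
    print(resize_photos(args.input, args.output, args.resize))

if __name__ == "__main__":
    main()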

The downside of this approach is that, if you're not careful, the resulting GUI may be nothing more than a wrapper around your original command-line interface. If your main goal is an extremely user-friendly GUI, perhaps you're better off with UI-First Development. When possible, though, it's refreshing to be able to focus on the actual code that you're writing and not the syntax to wire up a button click handler.

Sunday, August 12, 2012

Refactor at Your Own Risk

Often when we're left to work in someone else's code, we think it's poorly written and seek to improve it or, perhaps worse, rewrite it entirely. Refactor at your own risk, however. Doing so often has unintended consequences or provides little value. Here are my general guidelines:

  • Don't refactor code that won't change

When reworking code, you get the biggest bang for your buck in the code that you and your team work in nearly every day. You know, those places that are constantly changing as bugs are discovered or requirements change.

Now contrast this with refactoring that class that hasn't changed in years. Generally we refactor a piece of code so that it will be easier to make changes in the future. If a class is fairly static*, then refactoring provides little to no benefit and only increases the risk of adding new bugs.

  • Refactor in increments

Refactoring works best when done in small increments. Joel Spolsky reminds us to avoid the big rewrite:
We're programmers. Programmers are, in their hearts, architects, and the first thing they want to do when they get to a site is to bulldoze the place flat and build something grand. We're not excited by incremental renovation: tinkering, improving, planting flower beds.
There's a subtle reason that programmers always want to throw away the code and start over. The reason is that they think the old code is a mess. And here is the interesting observation: they are probably wrong. The reason that they think the old code is a mess is because of a cardinal, fundamental law of programming:
It’s harder to read code than to write it.

  • Be wary of refactoring uncovered code

Refactoring is best done when there is a suite of unit tests asserting the correct behavior of the class. In the absence of tests, consider writing some first, or perhaps avoid the refactoring altogether.

  • Use refactoring tools

Where possible, use tools that will perform a variety of common refactors, such as renaming variables, extracting methods, and changing method signatures.
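
For instance, here's the kind of small, tool-assisted extract-method refactor that carries little risk; the function names and tax rate are contrived for illustration:

# Before: validation and arithmetic tangled in one function.
def price_with_tax(price):
    if price < 0:
        raise ValueError("price must be non-negative")
    return price + price * 0.08

# After extracting methods, each piece can change and be tested alone.
def validate_price(price):
    if price < 0:
        raise ValueError("price must be non-negative")

def apply_tax(price, rate=0.08):
    return price + price * rate

def price_with_tax_v2(price):
    validate_price(price)
    return apply_tax(price)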

Don't rework code for no reason. Make those changes that will increase the readability and maintainability of your code. As always, the benefits must be weighed against the risks. Remember that, to your users, a working product is king.

*no pun intended

Monday, August 6, 2012

Object-Relational Mapping

I've been thinking a lot lately about the object-relational impedance mismatch. It's interesting to note that relational databases and object-oriented programming evolved around the same time, yet they're not very compatible with each other.

There have been a lot of good posts on the subject, such as Jeff Atwood's Object-Relational Mapping is the Vietnam of Computer Science. He basically concludes that there are four options: give up relational databases, give up objects, manually map between them, or use an object-relational mapper.

Recently I had concluded that object-relational mapping is the way to go. Now I'm not so sure. As I was looking at some example code for a Python ORM called Peewee, I realized something: with ORMs, you are still essentially defining all of your models as relations. You're just doing it in code rather than in SQL. Consider this model definition for a blog, from the Peewee documentation:
class Blog(BaseModel):
    name = CharField() # <-- VARCHAR

class Entry(BaseModel):
    headline = CharField()
    content = TextField() # <-- TEXT
    pub_date = DateTimeField() # <-- DATETIME
    blog = ForeignKeyField(Blog) # <-- INTEGER referencing the Blog table
Notice that we have to specify the type of a database column for each of our attributes. Since Python has no explicitly typed variables from which to infer this information, I suppose this is permissible. Notice further that Entry has a ForeignKeyField specifying the relationship between Entry and Blog. What we're left with is little more than a razor-thin layer of abstraction on top of a relational database.

These objects would look quite a bit different if we weren't so concerned about mapping them. In particular, Blog would likely hold a list of Entry objects, rather than each Entry carrying a "foreign key" to Blog.
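
Here's a sketch of what that object-first design might look like if the database weren't a concern; these plain classes are invented for illustration:

class Entry(object):
    def __init__(self, headline, content, pub_date):
        self.headline = headline
        self.content = content
        self.pub_date = pub_date


class Blog(object):
    def __init__(self, name):
        self.name = name
        self.entries = []  # the blog owns its entries; no foreign key

    def add_entry(self, entry):
        self.entries.append(entry)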

Maybe making our classes look more like relations is the price we pay. It's certainly more appealing than maintaining manual mapping code, and most businesses are wary of using an object store that the developers essentially have exclusive control over.

This is a hard problem. I guess that's why Ted Neward calls it the Vietnam of Computer Science.

Thursday, July 19, 2012

Privates Done Right

When I started learning Python, I quickly learned that there is no way to officially make a member variable private. There is only a convention that any variable starting with a single or double underscore should be treated as an implementation detail that is likely to change. If you try to access it, however, Python will not stop you.

I think that this is the right approach. Sometimes there are legitimate reasons for accessing a private variable, such as when testing legacy code or working with a third-party API. When I have a legitimate reason to do so, I don't want to have to go through some laborious process like reflection to get at the data.
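
For instance, a double-underscore attribute is merely name-mangled, so a test that really needs it can still reach it:

class Legacy(object):
    def __init__(self):
        self.__secret = 42  # stored on the instance as _Legacy__secret


obj = Legacy()
# obj.__secret would raise AttributeError, but the mangled name works:
print(obj._Legacy__secret)  # prints 42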

Of course, with new code, the real solution is to write more modular classes with a single responsibility. This way, there are more public entry points to test classes in isolation.

For example, consider this trivial class that performs a calculation and writes the result to the console:

class Logger(object):
    def write(self, value):
        print(value)


class DataDisplayer(object):
    def __init__(self, logger):
        self.__logger = logger

    def display(self, value1, value2):
        value = self.__do_calculation(value1, value2)
        self.__logger.write(value)

    def __do_calculation(self, value1, value2):
        return value1 * value2

In order to test the __do_calculation method, we have a few options. We can call the pseudo-private method directly, or we can test it through its public interface, display. The latter isn't desirable here, because it has the side effect of writing a value to the screen.

The ideal solution is to pull the responsibility of __do_calculation into its own class so we can test it in isolation:

class DataDisplayer(object):
    def __init__(self, logger, calculator):
        self.__logger = logger
        self.__calculator = calculator

    def display(self, value1, value2):
        value = self.__calculator.do_calculation(value1, value2)
        self.__logger.write(value)


class Calculator(object):
    def do_calculation(self, value1, value2):
        return value1 * value2


Now we have a public interface to test do_calculation in isolation.

In other words, when we write our tests and classes correctly, we shouldn't need to access private variables very often. For those rare cases when we do, though, it's kind of nice when the language doesn't try to stop us.

Thursday, June 28, 2012

Unit Testing and Assumptions


When we write a unit test, for better or worse, we're locking down a section of code. This is good when we want to make sure the logic doesn't change. This can be bad, however, when our tests make it difficult to make good changes. Maybe you need to fix a bug, or change the details of an algorithm.

The problem is, sometimes unit tests make too many assumptions about the code under test. We assume that an algorithm will be implemented in a certain way, or the method has so many side effects that the test must know about each of them.

In my mind, the best-case scenario involves feeding input to a function and getting something in return. I give you a sentence, and you capitalize every word for me. I don't care how you do it, just that the output is what I expect:

def test_upper(self):
    sentence = "this is a sentence."
    output = upper(sentence)
    self.assertEqual("This Is A Sentence.", output)


These tests are short, easy to write, and they make no assumptions about the underlying code. It turns out that the fewer side effects a method has, the easier it is to test.

Consider the following code:

class Box(object):
    def __init__(self, length, width):
        self.__length = length
        self.__width = width

    def compute_area(self):
        return self.__length * self.__width


The constructor just sets two private fields. This is difficult to test without either accessing the internals of the class or exposing the fields through getters. Even then, we would be testing implementation details that are subject to change. I would argue that we should test the constructor indirectly by testing compute_area as follows:


def test_compute_area(self):
    my_box = Box(4, 5)
    self.assertEqual(20, my_box.compute_area())


What we're really interested in is not that two private fields get set in the constructor, but that the object can compute its area.

Saturday, June 23, 2012

My First Open Source Project

In my first post, I mentioned BatchEdit, my first project in Python. I recently decided to host it on GitHub, both to motivate myself to work on it some more and to generate interest in the project.

BatchEdit is a command-line batch photo editor. Basically, you give it an input folder and an output folder and specify some adjustments to apply, such as resizing, sharpening, boosting contrast, or adding a border. Here is an example command that auto-rotates, increases contrast, converts to grayscale, resizes to 720 pixels, sharpens, adds a gray border of 5 pixels, and overlays a watermark:

python scripts\BatchEdit.zip --input C:\input --output C:\output
--autorotate --resize 720 --grayscale --contrast 1.2 --sharpen 1.3
--border 5,gray --watermark C:\logo_transparent.png

In my mind, the most common use case for this program is preparing photos for upload to the web. Photographers often want to auto-rotate, resize, sharpen, and apply some basic adjustments, such as boosting the contrast or saturation, before uploading. With many photos to edit, doing this by hand in a program like Photoshop is quite tedious.

Under the hood, the code takes advantage of the Python Imaging Library (PIL) for all image manipulation.

Currently there are a few shortcomings of the program. First, it requires that the end user have Python and PIL installed on his system. Second, it does not yet have a GUI on top of it. These limitations would probably keep all but the most expert users from using it.

Batch photo editors have certainly been done before, but I think mine is simpler than most. The source code is available on GitHub.

Thursday, June 21, 2012

Lean on the Compiler

I've been thinking a lot lately about dynamic typing, about how it supposedly sets you free from the bonds of the compiler and makes you ten times more productive. Some blog posts even go so far as to claim that dynamic typing will replace static typing thanks to the rise of unit testing. The advocates rightly point out that just because a program compiles does not mean it is correct, so unit tests are a better safety net. Granted that the code has full coverage, I agree that we can probably do without compile-time checking.

But is a fully covered code base a reality? Not likely, especially when you're working with legacy code.

Today at work I was changing method signatures, adding and removing parameters across many different classes. In situations like this, I'm glad to have the compiler make sure I don't miss anything, even before I get a chance to run what tests I do have. "Lean on the compiler" is the phrase Michael Feathers uses in Working Effectively with Legacy Code. Great read, by the way.

Today I realized that this is one technique that I really miss in dynamically typed languages. Rather than holding me back, in this case the compile-time checking provided confidence to make changes quickly.

Saturday, June 9, 2012

When TDD Breaks Down

There's been a lot of hype recently about Test-Driven Development (TDD). I really like it for the most part. It forces the programmer to think about how the code will be used, which often results in an easier-to-use API. It's also a good way to explore all of the edge cases and make sure they are covered with unit tests.

I've found, though, that sometimes having to write the test before the code is a little too restrictive. Sometimes you have no idea what the resulting code will look like or how exactly it will achieve your goal. In this case, I am an advocate of what I like to call exploratory coding. Code first until you have an idea of what your code needs to do. This may be accompanied by some ad-hoc testing. Then, when you have the functionality nailed down, cover it with tests. Of course, this only works if you're careful to keep your code modular and testable. If not, you may need to refactor first or rewrite the code altogether following TDD.

We often joke at work that the proper way to do TDD is to write the code first and comment it out, then get your failing test and fix it by uncommenting the code. This just goes to show how counter-intuitive test-first can be at times.

I often hear people argue that if you write the code first, there's a chance you'll get lazy and won't get around to testing it afterwards. I disagree. This just requires a little discipline, which I would argue is essential to being a good developer in the first place.

Essentially, the principle behind TDD is the importance of modular, well-tested code. Indeed, I would say that the rise of unit testing is one of the biggest developments to improve the overall quality of code in recent times.

Sunday, June 3, 2012

The Virtues of Dynamic Typing

Last semester, I took a class in Python. Coming from a C++/C# background, many things were new to me. I liked not having to use curly braces. I also liked not having to declare the type of my variables.

A few things caught me off guard, however. I found it strange that Python did not allow me to declare my class variables as private. I was also disappointed that I couldn't explicitly define an interface.

What I initially considered limitations of the language, I eventually came to regard as liberating. I felt empowered to be a responsible programmer without being babysat by the compiler. I knew which types I was using, and I knew which variables I intended to be private. When I wanted an interface, I simply implemented the same method signatures in several different classes, as in the sketch below. Indeed, this flexibility significantly increased my productivity.
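
For example, a duck-typed "interface" is nothing more than an agreed-upon method signature; this sketch is contrived for illustration:

class ConsoleWriter(object):
    def write(self, text):
        print(text)


class ListWriter(object):
    def __init__(self):
        self.lines = []

    def write(self, text):
        self.lines.append(text)


def report(writer):
    # Works with anything that has a write() method; no declarations needed.
    writer.write("report complete")


report(ConsoleWriter())  # prints the line
report(ListWriter())     # collects it instead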

At the end of the class we had a chance to build a final project. In just three weeks, I was able to build a command-line batch photo editor. Working on it was a joy, and the resulting code was very clean and terse.

My one frustration that never reached a satisfactory conclusion was deployment: either the end user needs to have Python installed on his system, or you must somehow package up the interpreter with the application. Neither solution is ideal.

Overall, I concluded that in many cases, static typing is an unnecessary burden to programmers. Dynamic typing is not without its problems, but Python will definitely hold a prominent place in my quiver from now on.