The Software Industry is Lost

Our industry is more than 50 years old and still trying to understand some fundamental questions. There have been many analogies and practices to describe how software should be built, yet we still do not have definitive answers to questions like "What defines good software?" and "What defines good code?" The answers people give to these questions are wide ranging, aren’t always compatible, and are by no means unanimous. We have trouble deciding what metrics to use to measure our successes and failures. We, as an industry, are lost in the dark and grasping onto anything that seems to light the way.

"What defines good software?"
Customers say, "Good software works and does what I need." Many programmers would add the stipulation that the code that creates the software must be good. Managers would say that development should not cost too much or take too long. Other requirements do not always get specified until problems arise: good software should be fast and responsive, it should not have security vulnerabilities, it should be easy to use, etc. Additionally, the type of relationship between the customer and the developer can cause these answers to change.

However, looking closely at these answers reveals that they are all tightly coupled. High quality code can deliver on most of these requirements. Developers writing high quality code and using good tools and strong practices are able to create working software that does what the customer needs. These same developers will write more code faster, lowering the cost of projects and satisfying management. Some of these savings can also be transferred to the customer.

"What defines good code?"
I’ve heard different answers to this question, too. Bug-free is a straightforward answer, but not necessarily a good one. Clear, concise, and extensible is another. Some say that code that is unnecessarily extensible is also unnecessarily convoluted. "Code that produces working software" is another definition that is similar to "bug-free", but not quite the same. I’m sure there are many more definitions out there, and I will get around to providing my own. All of them have merits, all of them have drawbacks.

While bug-free code seems desirable, it can be terribly written. Some user interface code I was working on recently illustrates this point well. In this particular interface there were two controls that behaved similarly, so they were both accessing the same source for certain properties like "width". I needed to make a change so that in one of these properties the controls would behave slightly differently. I realized that I could write some simple code that would work: since each control would access this value in turn, I could return one value for even calls to the value getter and a different value for odd calls. This is terrible code, even though in this case it would work and be bug-free. It does not behave consistently and could be broken very easily.

Good code needs to be robust and consistent. Since development almost always continues after "delivery", discoverability and readability are also important to good code. When looking at a new piece of code, someone familiar with the technology should be able to quickly discern how the code should be used, where it gets its data from, and what it does to that data. Someone who is making changes also needs to have a high amount of confidence that the code works and that it will be immediately obvious if the existing code is broken by the changes; keeping code highly tested facilitates this goal. In a perfect world, all code would be highly tested, easy to read, behave consistently over time, and not break unexpectedly when used in a way that was not initially intended.
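A hypothetical sketch of that alternating getter (the class name and values here are invented for illustration):

```csharp
// Invented illustration of the "even/odd getter" described above.
// It happens to work while exactly two controls read Width in strict
// alternation, but any change to the call pattern silently breaks it.
public class AlternatingWidthSource
{
    private int _calls;

    public double Width
    {
        get
        {
            _calls++;
            // Odd-numbered reads get one value, even-numbered reads another.
            return (_calls % 2 == 1) ? 100.0 : 120.0;
        }
    }
}
```

The first read returns 100, the second 120, and so on — "correct" only as long as the callers never change, which is exactly the inconsistency being criticized.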

"What defines a good user experience?"
High quality code is not enough to make good software. We should view unusable user interfaces as a bug in our software; an unusable interface makes the program just as useless as if it crashes consistently. A good user experience is not something that can be easily defined as it relies on emotion. The customer should feel good about the experience he is having while using the software. It should be easy for him to find the features he needs and to do the work he intends to do. If the software does more than one thing, each feature should be easy and obvious to access. Customers should not get confused by the way the program behaves while they are using it. Developers cannot deliver on these requirements while working in a vacuum.

Developers, designers, if you have them, and customers need to work closely together to ensure that the product being produced is highly useable and takes full advantage of the knowledge and experience of everyone involved. Developers cannot create user interfaces alone because the user experience will always make sense to the one who designed it. It does not matter how much sense the user interface makes to the developer, the customer must constantly give input on what works and what doesn’t. By working closely with the customers, the developers and designers know that they are producing an experience that works for the people who will be using the software. This close relationship has other benefits. For example, a close customer relationship will quickly reveal any misunderstanding that the developer has about the requirements for the product.

"What defines good software development?"
Developers have to deliver both a finished product and the tools and methods used to create that product so that it can continue to be altered and built upon. This sets us apart from most other industries. I don’t know of any other product or craft that provides the tools along with the finished product, or that is expected to continue to improve the finished product after it has been sold. These differences make software development unique and powerful. Each piece of the development process is equally important. The code is just as important as the user interface which is just as important as the finished product.

Software developers deliver more than just products, we deliver services before, during, and after product development. A service is provided every time a feature starts development: the problem being solved is framed in terms of software. If the problem has an existing solution without software, a direct implementation of that solution would not take full advantage of all of the benefits gained by using computers. Good developers create a product that provides more than a simple solution: they add efficiency, simplicity, and sometimes even enjoyment.

At the end of the product development cycle we continue to support our software through patches, updates, or even full upgrades. These are provided as customers find additional features that are needed or when new bugs are revealed. This service is provided faster and with fewer errors when the code and user experiences are both high quality. High quality code makes it easier for new code to be added to an existing code base with fewer regression bugs. High quality user experiences are easier to add new features to and these new features are more noticeable.

In my last post about CodeMash I said that I am not sure that the "Craftsman" or "Journeyman" analogy is how we want to describe ourselves. I’ll admit that statement was not fully informed, so I have been doing some reading, and I cannot disagree with the core of the movement: the Manifesto for Software Craftsmanship. However, "software craftsman" as a name seems like just another light in the darkness to grasp onto.

I don’t think we need an analogy to model ourselves after. While there is a lot we can learn from other fields, to be successful we are going to have to define our own space instead of trying to come up with a way to describe what we do based on another profession or trade. By trying to describe ourselves in terms of preexisting ideas and practices we may neglect part of what it takes to make our industry great. Everything that we create is important to what we do and should be given the attention necessary to be great. The higher quality each part is, the higher quality both our products and services will be.

Posted in process, srt | 1 Comment

CodeMash 2011

It has been about a month since CodeMash ended – there was good content and good discussions, which is exactly what I hope to get out of CodeMash. I continue to think about some of the ideas and discussions; I have yet to decide on the merit and value of some of what was presented.

The first thing I want to do is poke the hornet’s nest. I’m not sure that "Craftsman" or "Journeyman" is the analogy that we want to use to describe ourselves. There are some valuable lessons that can be learned from the life of a tradesman, but I think there are important challenges that we have that cannot be addressed by the analogy. I’ve been trying to apply some of what I’ve learned in one of the "Software Craftsman" sessions and found that it doesn’t really line up with the real world.

I attended some very interesting sessions and learned some cool new stuff. I went to the Scala Koans session by Dianne Marsh and Dan Hinojosa. Scala has some exceptionally strange syntax but seems like a very cool language. Carl Quinn’s talk about Netflix moving to Amazon Web Services was quite interesting. It surprised me that he said the cost of the data center versus the cloud was "about on par". I would have expected the data center to be significantly more expensive, but I imagine Netflix is using a huge number of EC2 machines. He also said that a rewrite of nearly all of their software was necessary because the cloud has many different considerations than a company-run data center. Joe Nuxoll’s talk on UX design was incredibly interesting and enlightening. It is both interesting and frustrating that UX design is a lot more about feeling and emotion than development work is. It is very difficult to pin down what will make a user experience good, but we know many ways in which we can make our code good. While there are some guidelines to help with UX design, it is a much more amorphous process. One of the big messages here was that designers and developers need to work closely together to create an effective product. The other piece of advice he gave that I thought was really good, but hard to convince management of, is that it is really valuable to create a lot of prototypes and abandon most of them.

Other parts of my CodeMash experience consisted of reinforcing behaviors that I had learned from previous CodeMash conferences or from solving problems throughout the year. While my first reaction to this was disappointment that there wasn’t some big new thing to learn, I ended up comforted in the knowledge that I have been doing things in what is currently considered the "right" way. As you get closer to the leading edge of practices and technology the small details become increasingly important. For example, one of the sessions I went to talked about techniques that I was already using, but using tools that I am not currently using. These tools sound very interesting and I will have to look into them when the opportunity arises.

I’d have to say my biggest disappointment of this year was that the weather prevented some speakers from arriving on time, so I was unable to go to some sessions that I was excited for.

My biggest win came in the poker tournament.

Posted in CodeMash, srt | 1 Comment

Parallel LINQ (PLINQ)

LINQ is getting an upgrade with .Net 4.0. Features are being added that allow us to easily execute queries in parallel, giving us another reason to drop those odious for and foreach statements and use LINQ. .Net 4.0 introduces ParallelQuery<T> (called IParallelEnumerable in early previews), and queries against it execute with as much parallelism as the runtime can manage. In addition, IEnumerable<T> gains a new extension method, AsParallel(), that returns your sequence as a ParallelQuery<T>. This allows existing LINQ queries and existing data sources to be quickly and easily transformed into well-oiled query machines.

var numbers = new List<int> {1,2,3,4};
var squares = numbers.AsParallel().Select(n => n*n);

This query will use all available processor cores when it comes time for evaluation. This particular example is not large enough to really justify parallel processing, and parallelism brings some potential pain. When this finally evaluates, squares could contain any permutation of {1,4,9,16}, and it won’t be the same order every time it is run. If it is absolutely imperative that your query results come back in the same order as they would from a sequential query, you can use the AsOrdered() method.
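The ordering opt-in (AsOrdered() in the shipped .Net 4.0 API) fits right into the same pipeline — a minimal sketch:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

var numbers = new List<int> { 1, 2, 3, 4 };

// AsOrdered() tells PLINQ to preserve the source ordering in the
// results, trading away some parallel throughput for determinism.
var squares = numbers.AsParallel()
                     .AsOrdered()
                     .Select(n => n * n)
                     .ToList();

Console.WriteLine(string.Join(", ", squares));  // 1, 4, 9, 16 every time
```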

For benchmarking purposes, I created two queries. The first is a standard sequential LINQ query, the second uses ParallelEnumerable to generate the range of numbers and parallelize the query. They both find the count to force execution of the query.

int upper = 1000000;
Query 1:
var numbers = Enumerable.Range(0, upper).Select(n => (double)n).
    Where(n => n % 2 == 0).
    Select(n => n * (n / 2));
var count = numbers.Count();
 
Query 2:
var numbers = ParallelEnumerable.Range(0, upper).Select(n => (double)n).
    Where(n => n % 2 == 0).
    Select(n => n * (n / 2));
var count = numbers.Count();

In my testing (which was admittedly not exhaustive), Query 1 takes on average 77 seconds to run while Query 2 takes on average 56 seconds on a 2-core machine. The benefit of running in parallel is self-evident, as is the ease of getting it.
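For anyone who wants to reproduce the comparison, here is one way to time it with Stopwatch (this is my own harness, not the exact code behind the numbers above):

```csharp
using System;
using System.Diagnostics;
using System.Linq;

int upper = 1000000;
var sw = Stopwatch.StartNew();

// Same shape as Query 2: generate in parallel, filter, transform, count.
var count = ParallelEnumerable.Range(0, upper)
    .Select(n => (double)n)
    .Where(n => n % 2 == 0)
    .Select(n => n * (n / 2))
    .Count();

sw.Stop();
// count is 500000: the even numbers in [0, upper).
Console.WriteLine("{0} results in {1} ms", count, sw.ElapsedMilliseconds);
```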

Danger!

There is a big "gotcha" that can come out of these parallel statements, and since it is so easy to create a parallel expression it is also really easy to do something that won’t work as intended. The rule here is: do not write queries that will make changes to shared state. In general, your queries should not be changing state at all, but if you make changes to shared state you will end up with unintended behavior and possibly thrown exceptions. For example:

private bool testValue;
private void SharedStateTest()
{
    var test = ParallelEnumerable.Range(1, 1001).Select(n => testValue = (n > 995));
    test.Count();
    Console.WriteLine(testValue);
}

If this were written as a sequential query, "True" would always be printed; however, this PLINQ query is non-deterministic and frequently outputs "False". The readability and maintenance of your code will be much better if you never make state changes in your LINQ expressions, and you will avoid race conditions like the one in this example.
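The same question — did any value in the range exceed 995? — can be asked without writing to shared state at all, which makes the query deterministic again:

```csharp
using System;
using System.Linq;

// No query clause writes to outside state, so the answer is the same
// no matter how PLINQ partitions the work across cores.
bool anyAbove = ParallelEnumerable.Range(1, 1001).Any(n => n > 995);

Console.WriteLine(anyAbove);  // always True
```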

Posted in .Net, C#, Linq | Comments Off on Parallel LINQ (PLINQ)

WPF LL: TextBoxes that just keep growing

This problem was probably discovered accidentally and shows me once again how important it is to have non-technical and new people testing your programs. The problem encountered was that the TextBoxes on one of our Windows would just keep expanding if you typed past their normal width. This would cause the form to expand, so if you typed enough the form would get wider than the screen. We found the problem not because someone typed a lot in a TextBox, but because one of the TextBoxes held a file path and a very long file path was inserted into it.

The source of this problem was that we had the TextBox in a ScrollViewer with the HorizontalScrollBarVisibility set to "Auto". Changing HorizontalScrollBarVisibility to "Disabled" caused the TextBoxes to stop expanding with typing. The TextBox will still resize as expected (because of the Grid it is in) when the Window resizes.

Posted in srt, WPF | 4 Comments

WPF LL: XML Binding and Data Converters

In the project I’ve been working on, we’re binding a bunch of controls to an XML document. In our bindings, we are extensively using XPath. One of the things that did not occur to me immediately, and seems very obvious now, was that with XML binding you are still binding to objects, specifically XmlNode objects. This means that you can access properties of the XmlNode that you are bound to with Path instead of XPath. For example, you can bind to "Path=InnerText".
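That "you are still binding to objects" point can be seen with plain System.Xml, no WPF required (the document here is invented for illustration):

```csharp
using System;
using System.Xml;

var doc = new XmlDocument();
doc.LoadXml("<config><setting>42</setting></config>");

// An XPath expression selects an XmlNode...
XmlNode node = doc.SelectSingleNode("/config/setting");

// ...and that node has ordinary CLR properties, like InnerText,
// which is exactly what Path=InnerText reaches in a WPF binding.
string text = node.InnerText;
Console.WriteLine(text);  // 42
```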

One reason this is important is that it makes using data converters much easier. If you bind to a part of an XML source using XPath, the value object in the Convert method and the targetType of the ConvertBack method will sometimes be of type XmlDataCollection (note: this is not universal behavior; I have had this problem when binding to the ItemsSource of a ListBox but not the Text of a TextBox). However, if you use Path, these will be the same type as the property you bind to; for example, with Path=InnerText it will be a string. This makes manipulation in the Convert method much simpler and, it seems to me, makes use of the ConvertBack method possible.

Posted in .Net, WPF | Comments Off on WPF LL: XML Binding and Data Converters

WPF Lessons Learned – Use ContextMenu.Items over ContextMenu.ItemsSource

I’ve been working on a WPF project for a client and have learned a lot in the process. I’d like to touch on some of these things. This first post will be about creating a ContextMenu.

I’ve had to create more than one custom ContextMenu for this project. In my first attempt, I created a List<MenuItem> and added all of the MenuItems I wanted in my ContextMenu to this List. Then I created a new ContextMenu and set the ItemsSource to the List. In the MenuItems’ Click event I would need access to the UI element that opened the menu, but when I went to try to get at this element I found that the MenuItem.Parent was null.

When I changed my approach to creating a new ContextMenu and then adding all of my MenuItems to it with ContextMenu.Items.Add(), all of them had the ContextMenu as their Parent. ContextMenu has a PlacementTarget property that is the Object that caused the ContextMenu to open, so my problems were solved.

Posted in .Net, C#, WPF | Comments Off on WPF Lessons Learned – Use ContextMenu.Items over ContextMenu.ItemsSource

Study Group – WPF

For the last two weeks Anne Marsan has led study group. Both of her sessions were about WPF and she led them by writing up instructions on how to produce a project that illustrated what she wanted to teach.

Her first session was an introduction to WPF. We wrote a form in XAML and played with a few different controls. The end result was a little application that could draw circles on a canvas and clear the canvas. Part of the fun of our study group is that everyone is learning the whole time, even the leader. We found that the circles were being drawn over other controls because of the ordering of the controls in the XAML, but if we set "ClipToBounds" to true on the canvas it would not draw over other controls.

Yesterday’s session was mostly centered around data binding, but she also went over a little bit of layout. Data binding is made a lot easier in WPF, and you can do it in the XAML or in code. Also, there is a variety of things you can bind to controls; you can bind to a particular property of an object or to an XPath on an XML document.

WPF has a lot of new ways to lay out an application, and I think this is a bit of a double-edged sword. It has a high bar for entry, because trying to figure out how to build your UI can be pretty daunting. However, once you get over the steep learning curve, there are a lot more options at your disposal to create exactly what you want.

This is the last study group of the year, and we’ll be reconvening in February. I’m really looking forward to it!

Posted in .Net, Study Group, WPF | Comments Off on Study Group – WPF

Study Group – F#

Chris Marinos led study group this week on F#. F# is Microsoft’s foray into functional programming; it isn’t strictly functional, but it is clearly designed with that purpose in mind. I thought I had gotten some experience with functional concepts using LINQ in C# and in some of my Ruby experience, but Chris showed us some really cool ways of doing things.

Chris started by presenting some of the language elements; one of the main points he talked about was side effects. Methods in imperative languages like C# can mutate their input arguments, so if you pass the same reference into a method multiple times, you won’t get the same result each time – the input data will have changed. In F#, and other functional languages, methods are written without side effects, which lets you concentrate much more on what you are trying to accomplish.
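A small C# illustration of the kind of side effect being contrasted (the method here is invented for illustration):

```csharp
using System;
using System.Collections.Generic;

var items = new List<int> { 1, 2, 3 };

// Imperative style: the method mutates its argument, so calling it
// twice with the very same reference produces different results.
static int TakeNext(List<int> queue)
{
    int head = queue[0];
    queue.RemoveAt(0);  // side effect: the caller's list shrinks
    return head;
}

Console.WriteLine(TakeNext(items));  // 1
Console.WriteLine(TakeNext(items));  // 2 -- same call, different answer
```

A side-effect-free version would instead return both the head and a new list, leaving the input untouched.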

Next, we solved a couple Euler problems in an imperative and functional way to see the differences in style and execution. It was a really interesting exercise.

I’ve been interested in the idea of functional languages for some time, so going over what we did was really interesting. I think it would be very cool to build an entire application in F#.

Posted in .Net, F#, Study Group | Comments Off on Study Group – F#

Study Group – JEE

Last year we ran a study group in preparation for the .Net certification test. Bill Heitzeg decided to revive the study group with a different focus. Our new purpose is to get a brief overview of a technology each week to stay up to date and have a wide range of exposure. To this end, we’ve been having someone give a presentation in Jam format each week. We’ve seen Silverlight, XSD, Ruby on Rails, and I gave a Jam on Ruby.

This week, Bill Heitzeg gave a Jam on JEE. He mixed jamming with some lecture, which I think turned out to be a good experiment. He explained the basics of JEE including JARs, WARs, EARs, and how JEE is split into the Web and the Enterprise sides. We also went over a few of the many features of Eclipse.

Eclipse is a huge tool, and is a bit daunting if you don’t know where to look for something. It does have a lot of good features built in that you would need a separate add-on for in Visual Studio. It has code generation and refactoring tools built in, along with on-the-fly code analysis for compile errors. None of this worked perfectly, and the spell check was rather unfortunate (it wanted to change zombies to swamps), but it is definitely a powerful tool.

We built a simple JSP page and went over how to insert code into the HTML on this page using the <% %> and <%= %> tags. This works just like classic ASP (or maybe classic ASP works just like this). We then built a simple servlet and talked a little bit about the Get action. I had done much of this before, but it was a great refresher.

An hour is not a ton of time, and so we didn’t get to cover too much, but it is enough for exposure. No one will come out of these sessions able to write professional level code, but we do come out with more knowledge of the technology, and a good starting point to continue learning.

Posted in Study Group | Comments Off on Study Group – JEE

Convert to Linq!

While writing my last post, I was looking at the code to see if it was really something I wanted to post for general scrutiny and saw that I could use Linq to do the same thing. Here’s the old code:

bool visible = false;
foreach (var screen in Screen.AllScreens)
{
    if (screen.WorkingArea.Contains(this.DesktopLocation))
    {
        visible = true;
        break;
    }
}
if (!visible)
    this.Location = new Point(100, 100);

I changed it to (query syntax):

var visible = (from screen in Screen.AllScreens
               where screen.WorkingArea.Contains(this.DesktopLocation)
               select screen).Count() > 0;
if (!visible)
    this.Location = new Point(100, 100);

Or the extension method code:

var visible = Screen.AllScreens.Any(screen =>
    screen.WorkingArea.Contains(this.DesktopLocation));
if (!visible)
    this.Location = new Point(100, 100);

A quick run down of the conversion to the query syntax version:

  • The foreach turns into the from line
  • The if predicate turns into the where line
  • The select and .Count() combine to form what I really want to know

I prefer the extension method syntax in this case, though. The call to Any() really says exactly what I want in a simple way. Any() will return true as soon as it finds an element that matches the supplied predicate.

Since my last post on Linq, I have used it to make code much more readable and succinct in this fashion. I love it.

Posted in .Net, Linq | Comments Off on Convert to Linq!