Friday, February 27, 2009

From nDoc to Sandcastle

Summary: A new old way to generate documentation (help) files for .NET 2.0+ applications.

I owe Kevin Downs, the author of nDoc, a long-overdue apology (nDoc is a popular open-source help file generator designed for .NET 1.x apps). Although I used nDoc extensively (in fact, it was one of my favorite development tools), I never contributed to the project. The nDoc project died in mid-2006, and it was partially my fault. I'm sorry, Kevin. Shame on me: I should've started celebrating Freeware Appreciation Day earlier.

[Not to shift the blame, but the role of Microsoft in the nDoc demise was hardly praiseworthy. I never understood Microsoft's approach to successful applications complementing -- notice, not competing with -- Microsoft technologies and tools: announce an alternative solution, overpromise, and underdeliver. Why wouldn't Microsoft cooperate with the authors of these applications in some way? I totally agree with Scott Hanselman's point:
"It's a shame that Microsoft can't put together an organization like INETA (who already gives small stipends to folks to speak at User Groups) and gave away grants/stipends to the 20 or so .NET Open Source Projects that TRULY make a difference in measurable ways. The whole thing could be managed out of the existing INETA organization and wouldn't cost more than a few hundred grand - the price of maybe 3-4 Microsoft Engineers."
Microsoft should've known better.]
Well, life went on, and I needed an nDoc replacement that would work with .NET 2.0+ apps. So I tried Sandcastle, the Microsoft-sponsored open-source alternative to nDoc, and to my surprise, it worked reasonably well. There were a couple of problems, though.

Problem #1: Sandcastle failed to convert a working nDoc project with error:
"Unable to convert project. Reason: Error CVT0005: Error reading project from 'C:\[...]\[...].ndoc' (last property = DocumentInheritedMembers): String was not recognized as a valid Boolean."
I posted this problem at the CodePlex Issue Tracker not expecting it to be noticed, but received a response on the same day. Since it was easy enough for me to create a new project, I did not use the suggestion, but if you encounter a similar issue, check out the replies to the Error CVT0005 converting nDoc project post.

Problem #2: My first attempt to compile the project failed with error:
"The imported project "C:\SandcastleHelpFileBuilder.targets" was not found."
After some poking around, I figured out that the SHFBROOT environment variable used by the help compiler did not take effect after installation (environment variables set by an installer are not visible to processes that were already running). To fix it, I had to log off and log back on to Windows.
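If you run into the same issue, you can first check whether the variable is visible to your session before resorting to a logoff. Here is a trivial diagnostic snippet in C# (my own, not part of Sandcastle):

using System;

class CheckShfbRoot
{
    static void Main()
    {
        // Prints the SHFBROOT value, or "(not set)" if the variable
        // is not visible to processes started from this session.
        Console.WriteLine(
            Environment.GetEnvironmentVariable("SHFBROOT") ?? "(not set)");
    }
}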

After that everything worked well. You can see an example of the Sandcastle project and the generated help file in the Windows service demo project.

If you want to generate documentation from the XML comments in your Visual Studio 2005 (and later) projects, you need to do the following:
  1. Make sure that HTML Help Workshop is installed.
  2. Install the main Sandcastle app.
  3. Install Sandcastle Help File Builder (SHFB) (a GUI app used for building help files).
  4. Create a new project and add your Visual Studio 2005 (or later) project, solution, or output (DLL or EXE) file to the SHFB documentation sources.
  5. Define other project properties and build the project.
I specified the HTML Help Workshop location in the Sandcastle settings, but this may not have been necessary. I have only used Sandcastle to build HTML Help 1.x (.CHM) files, so I'm not sure how it works for HTML Help 2.x (.HxS) files.
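Keep in mind that Sandcastle builds the documentation from the standard XML comments in your code, so make sure that XML documentation file generation is turned on (in Visual Studio: Project Properties > Build > XML documentation file); otherwise, the builder will have nothing to document. A contrived C# example of the comments Sandcastle understands:

/// <summary>
/// Converts a temperature from Celsius to Fahrenheit.
/// </summary>
/// <param name="celsius">Temperature in degrees Celsius.</param>
/// <returns>Temperature in degrees Fahrenheit.</returns>
/// <example>
/// <code>
/// double f = Temperature.ToFahrenheit(100.0); // returns 212.0
/// </code>
/// </example>
public static double ToFahrenheit(double celsius)
{
    return celsius * 9.0 / 5.0 + 32.0;
}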

If you are not sure about particular settings, check the provided help file or Sandcastle Help File Builder Overview. You can find a number of related projects -- such as DocProject for Sandcastle, Sandcastle Extensions, and others -- at CodePlex, but I haven't tried them. For the latest news, check the Sandcastle blog.

See also:
NDoc to Sandcastle - Generate MSDN Style Documentation and Help Files Based on XML Comments in your .NET Code by David Hayden
Microsoft's Sandcastle emerges as NDoc's creator calls it quits by Brian Eastwood
A Coder’s Guide to Writing API Documentation by Peter Gruenbaum
Taming Sandcastle: A .NET Programmer's Guide to Documenting Your Code by Michael Sorens

Tuesday, February 24, 2009

ASP.NET AJAX and jQuery

Summary: A brief recap of the talk and presentation given to the Sacramento .NET User Group, plus links to shared resources.

Thanks to all who attended my talk at the Sacramento .NET User Group on Tuesday. In case you missed the event, the discussion focused on developing intranet applications using ASP.NET AJAX and jQuery. We covered the following topics:
  • Pros and cons of different technologies for building rich internet applications (RIA)
  • Introduction to ASP.NET AJAX and jQuery
  • ASP.NET UpdatePanel and UpdateProgress controls
  • Using UpdatePanel with data-bound controls, such as Repeater (see the sketch after this list)
  • Pros and cons of using UpdatePanel with jQuery (and other alternatives)
  • Adding AJAX and jQuery to an ASP.NET application (walkthrough)
  • Common problems and solutions for applications using ASP.NET AJAX, UpdatePanels, and jQuery
  • Demos and references (tools, tutorials, videos, and more)
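To give you a taste of the walkthrough, here is a minimal code-behind sketch (the page, control, and method names are made up) of a Repeater refreshed via an UpdatePanel: clicking the button causes a partial postback, so only the panel's contents are re-rendered, not the whole page.

using System;

public partial class ProductList : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Bind the Repeater on the initial request only; partial
        // postbacks re-bind it explicitly in the click handler.
        if (!IsPostBack)
        {
            BindItems();
        }
    }

    // RefreshButton is registered as a trigger of the UpdatePanel
    // that contains ItemsRepeater (markup not shown).
    protected void RefreshButton_Click(object sender, EventArgs e)
    {
        BindItems();
    }

    private void BindItems()
    {
        ItemsRepeater.DataSource = GetItems();
        ItemsRepeater.DataBind();
    }

    private string[] GetItems()
    {
        // A stand-in for real data access.
        return new string[] { "Apples", "Oranges", "Pears" };
    }
}
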
I uploaded the presentation to SlideShare. You can also download the original PowerPoint slides from Box.net.

To run the demo project, download the ZIP file containing the Visual Studio 2008 solution, extract the files, and open the solution (you will be prompted to create a virtual directory for the project).

UPDATE: An article by Dave Ward explains the idiosyncrasy that I mentioned in the presentation (multiple events firing after a partial postback) and offers ways to mitigate the problem (I will update the presentation and sample accordingly). Thanks, Dave!

Here are some additional references that did not make it into the presentation, but that you may find useful:

Monday, February 23, 2009

Implementing Windows services in Visual Studio 2008

Summary: How to write a better Windows service in Visual Studio 2008.

Last week I discovered a couple of problems related to porting my Windows service demo project from Visual Studio 2003 to Visual Studio 2008. A minor problem was caused by an obsolete method in the sample Windows service. Another problem was quite embarrassing: once started, the service could not be stopped from the Service Control Manager. To fix this problem (and a couple of other issues related to .NET 2.0-specific functionality), I made the following changes:
  • Swapped the contents of Start and OnStart methods (now the Start method calls the OnStart method).
  • Moved the contents of the Stop method to the OnStop method.
  • Removed the Stop method (it's really not needed).
  • Modified the IsInstalled method (one possible implementation is sketched after this list).
  • Renamed the TestService source file in the demo project to CustomService (to reflect the name of the class).
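Regarding the IsInstalled method: I won't reproduce the exact library code here, but such a check typically boils down to something like the following sketch using the standard ServiceController class (illustrative only, not the actual My.Utilities source):

using System.ServiceProcess;

public static class ServiceHelper
{
    public static bool IsInstalled(string serviceName)
    {
        // Enumerate the services registered on the local machine and
        // look for one with a matching (case-insensitive) name.
        foreach (ServiceController service in ServiceController.GetServices())
        {
            if (string.Compare(service.ServiceName, serviceName, true) == 0)
            {
                return true;
            }
        }
        return false;
    }
}
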
If you have been using the My.Utilities library to implement a Windows service and are planning to upgrade it to the Visual Studio 2008 version, please use the updated projects (see the download link below) and keep an eye on the following:
  1. Move the custom startup logic of your WindowService-derived class from the Start method to the OnStart method.
  2. Don't forget to call the base class' OnStart and OnStop methods in the corresponding overridden methods (see the sketch after this list).
  3. Well... that's it.
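Put together, the startup/shutdown pattern now looks roughly like this (a simplified sketch, not the actual library source):

// WindowService is the base class provided by the My.Utilities library.
public class CustomService : WindowService
{
    // The SCM calls OnStart when the service starts; custom startup
    // logic now lives here (it used to be in the Start method).
    protected override void OnStart(string[] args)
    {
        base.OnStart(args); // let the base class do its housekeeping
        // ... initialize timers, worker threads, etc. ...
    }

    // The SCM calls OnStop when the service is being stopped.
    protected override void OnStop()
    {
        // ... stop timers, worker threads, etc. ...
        base.OnStop(); // let the base class clean up
    }
}
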
One other thing that I could've done would be to use nullable DateTime members and variables when checking for uninitialized date and time values (instead of comparing against DateTime.MinValue), but since this would not improve functionality, I left the current implementation as is. If you strive for code elegance, feel free to do it yourself, since you have the source.
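For example (a hypothetical illustration; the class and field names are made up):

using System;

public class ServiceState
{
    // Current approach: DateTime.MinValue doubles as the "not set" marker.
    private DateTime lastStartTime = DateTime.MinValue;

    public bool IsStartTimeSet
    {
        get { return lastStartTime != DateTime.MinValue; }
    }

    // Nullable alternative (.NET 2.0+): the intent is explicit.
    private DateTime? lastStartTimeNullable;

    public bool IsNullableStartTimeSet
    {
        get { return lastStartTimeNullable.HasValue; }
    }
}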

Here is the Visual Studio 2008 version of the project, which incorporates all bug fixes reported to date. Let me know if you encounter any problems.

See also:
Write a Better Windows Service by Alek Davis

Thursday, February 12, 2009

To TDD, or not to TDD?

Summary: Some thoughts on Test-Driven Development (TDD).

A few weeks ago, Stack Overflow co-founders Joel Spolsky and Jeff Atwood criticized certain aspects of Test-Driven Development (TDD) and the SOLID Principles of Object-Oriented Design (OOD) [the latter advocated by Robert C. Martin, AKA Uncle Bob]. Check out the excerpts from the original podcast and the follow-up talk (you may also want to listen to Scott Hanselman's interview with Uncle Bob, which prompted Joel's rant). As expected, the criticism triggered rather emotional rebuttals from TDD supporters, including Uncle Bob himself (read the comments, too), which were followed by counter-rebuttals from Jeff Atwood. Eventually, Joel, Jeff, and Uncle Bob met for a makeup session, which you can find -- along with a summary -- online. Although all participants of podcast #41 seemed to agree on many issues, I have a feeling that the opposing sides stuck to their points of disagreement: Uncle Bob remained convinced that TDD could do only good (the more TDD, the more good), while Jeff and Joel continued emphasizing that other aspects of software development may be more important than TDD.

I happen to lean more towards Jeff and Joel for the following reasons (in addition to the reasons mentioned in The Stack Overflow podcasts).

First, Joel mentioned the issues related to GUI testing, but it's not just the GUI: the database layer poses another challenge. When unit testing database-driven apps, a common approach is to rely on mock objects (in which case you skip database testing entirely) or to use test-specific data in a simplified version of the database (in which case you have to figure out how to maintain this database and integrate it into your build process). Even with 100% code coverage of the business layer, if your unit tests skip the GUI and the database, roughly half of the application will remain untested, and you'll have to address this gap with functional testing, which will overlap with the unit tests. So why spend time writing unit tests for code that will be tested functionally anyway?
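To illustrate the first approach, here is a contrived example with a hand-rolled fake and NUnit-style assertions (all names are made up): the test exercises the business logic, but the real data access code -- the SQL, the schema, the connection handling -- never runs, so it remains untested.

using NUnit.Framework;

public interface IOrderRepository
{
    decimal GetOrderTotal(int orderId);
}

// Business logic under test: orders over $1,000 get a 10% discount.
public class DiscountCalculator
{
    private readonly IOrderRepository repository;

    public DiscountCalculator(IOrderRepository repository)
    {
        this.repository = repository;
    }

    public decimal GetDiscount(int orderId)
    {
        decimal total = repository.GetOrderTotal(orderId);
        return total > 1000m ? total * 0.1m : 0m;
    }
}

// Hand-rolled fake: returns a canned value; no database is touched.
public class FakeOrderRepository : IOrderRepository
{
    public decimal GetOrderTotal(int orderId)
    {
        return 2000m;
    }
}

[TestFixture]
public class DiscountCalculatorTests
{
    [Test]
    public void LargeOrderGetsTenPercentDiscount()
    {
        DiscountCalculator calculator =
            new DiscountCalculator(new FakeOrderRepository());

        Assert.AreEqual(200m, calculator.GetDiscount(42));
    }
}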

Second, claims that TDD improves code quality are rarely, if ever, substantiated by data. You may hear statements like:
"Since I (we) started using TDD, my (our) code has become much better."
I'm not claiming that these statements are false, but without data they are just personal opinions. Even assuming that the code did in fact improve, could the improvement be due to other factors, such as growing programming experience, better tools, or changes in team composition and processes?

Analyzing the effects of TDD on code quality is extremely difficult. One of the better studies on the subject was conducted by Microsoft and IBM (see the 8-minute interview with one of the researchers and the original paper). The study found that TDD improved code quality, but it also increased development time.

There is a trade-off: either use TDD and spend more time writing, maintaining, and running unit tests now, or skip TDD and spend time fixing bugs later. The question is: how much time? If TDD activities take 2 weeks out of an 8-week development cycle (a typical 25% TDD penalty) to prevent defects that could've been found and fixed in 3 days during functional testing, would TDD give you the best return on investment? By the way, it is worth noting that the team with the smallest test coverage (62%) in the study achieved the best results (a 90% drop in defects), while the team with the highest test coverage (95%) achieved the "worst" results compared to the other teams (a 40% drop in defects; still not bad when compared to the control group). I would also emphasize that the use of TDD in the study was close to ideal: the projects were appropriate for TDD, and the teams were not pushed to achieve maximum code coverage.

The Microsoft-IBM study had a few problems, though. First, the number of participants was rather small. Second, and more important, it tried to compare similar teams and projects, but there were many differences between them, such as team and project sizes and project durations, which is expected in a study focusing on real-life projects. Studies performed in academic settings -- with undergraduate students of approximately the same grade working on identical projects -- attempted to minimize some of the differences and make the experiments more controlled. According to an abstract from one such study:
"[T]est-first students on average wrote more tests and, in turn, students who wrote more tests tended to be more productive. We also observed that the minimum quality increased linearly with the number of programmer tests, independent of the development strategy employed."
How this sounds depends on how you interpret the study's findings. If you're a TDD supporter, you may join Phil Haack and announce that Research Supports The Effectiveness of TDD. Or, you can look more closely at the results of the study and come to a different conclusion. For example, Jacob Proffitt suggested (see also post comments) that:
"[W]ithout question, testing first leads to having more tests per functional unit. The question is if this is valuable. This study would seem to indicate that this is probably not the case, at least if quality is your intended gain. But then, I'm not that surprised that number of tests doesn't correspond to quality just as I'm not surprised that the number of lines of code doesn't correspond to productivity."
I would also point out that the code in these experiments was not written for database-driven apps with web-based GUIs, so again, it was an ideal usage of TDD. I would speculate that if either of these studies had focused on database-driven apps with web front ends, the results would probably be even less encouraging.

I'm not an opponent of TDD and agree that it can be valuable when applied to certain types of projects. My gripe against TDD is that it is often sold as yet another silver bullet and forced into adoption indiscriminately, including on projects where TDD offers minimal, if any, benefits.

I also resent the assumption that TDD always leads to good code. My project can have 100% code coverage, but it can still suck. It will pass all unit tests, all right, but at the same time it can be unreadable, unnecessarily complex, and uncommented, resulting in an app with a bad GUI and unintuitive behavior, an app that both the users and the support teams hate. But don't you dare criticize my code: since it's 100% unit tested, it's good by definition. Usability testing? Never heard of it. And as far as everything else goes (like comments, better design, etc.), do I have time for that when I spend my better hours working on unit tests? Something's gotta give.

Finally, what about all those great apps written by developers who did not use TDD or any other silver-bullet-like methodology? Take Google developers, who apparently do not follow any particular software development methodology or approach, but somehow manage to write apps that people love. If they can write great apps (which is one of the goals of good code), why would one want to impose TDD on them?

Now, Google developers are good, but what do you do if your developers are bad? [I'm not talking about developers who make occasional, sometimes serious, errors (this happens even to the best of us); I'm talking about developers who consistently write bad code.] Wouldn't TDD help them write better code? I wouldn't expect it, but if you can share a success story, please leave a comment.

If your organization suffers from bad code, consider adopting TDD (when applied correctly, it may help), but more importantly, consider other factors as well. Once you adopt such common-sense practices, you may not need TDD after all.

See also:
Hanselminutes Podcast 146 - Test Driven Development is Design - The Last Word on TDD
Hanselminutes Podcast 31 - Test Driven Development
TDD Tests are not Unit Tests by Stephen Walther
Test-After Development is not Test-Driven Development by Stephen Walther