5 Technologies Dead by 2013?

I was looking over my list of draft posts (102!), ranging all the way back to 2006, and found this, which was kind of amusing:

Over at The Standard, Don Reisinger predicts the demise of a few consumer technologies:

  • Blu-ray
  • Desktop PCs
  • Slow mobile networks
  • Local file storage
  • Desktop operating systems

As with all predictions, take this with a grain of salt. We can’t predict things a few months out with much accuracy – how are we even going to begin to predict specifics like these with 5 years of events left to happen?

My thoughts:

I agree with the notion that Blu-ray will go by the wayside unless something happens to prevent it (and LaserDisc is an apt analogy), but I don’t know that the vendors can really do anything about it. They can’t just arbitrarily lower the price, and the real cost can’t come down without deeper market penetration. There’s a classic chicken-and-egg problem here. By 2013, portable solid-state disks may be able to compete with DVD in terms of portability, capacity, and price. Solid-state performance would put any optical storage to shame, but it’s still too expensive. We’ll see what happens, but I also do not expect Blu-ray to survive much longer.

I only partially agree about desktop PCs. Notebook computers have certainly improved in performance, but desktops have also shrunk in size, and their cases have become more portable and stylish (Shuttle, for example). Despite the performance gains in notebooks, the price-to-performance advantage is still held strongly by desktops, and the DIY crowd (a significant part of the PC market) will always be ready to build the next custom PC. It’s also worth pointing out that the best-performing notebooks don’t even begin to touch the best-performing desktops; it’s not a real contest. I expect notebooks’ market share to increase, but PCs aren’t going away anytime soon.

I think the surest bet in this list is that slow mobile networks are going to go away. In fact, I think this is kind of an inane prediction, because everyone agrees, and we’re already halfway there anyway.

I disagree wholeheartedly with the last two. Local file storage is definitely not going to go away. I think small portable flash drives will become more and more important, but I also think hard drives are going to continue to be bought and sold in large quantities. Broadband links can only handle so much (think HD movies and games), and people will feel insecure about their personal data if it exists only online (case in point: I’m as much of an online-o-phile as you’ll find, but you can pry my hard drive from my cold, dead fingers). Local backups are vital, regardless of how reliable you think your online storage is.

Locally installed desktop operating systems are definitely not going away, either. What does he think is going to run all those notebook computers? The Internet? Earth to Don: you can’t install the Internet. You’d need an operating system to get your network interface working, if nothing else. Guess which operating system has the greatest device compatibility? Microsoft Windows, kiddies. Even if you get an “internet appliance”, it still has an operating system under the covers, most likely a stripped-down embedded Windows or Linux.

I predict that people will still be reading half-baked predictions and lists in 2013, but those lists won’t be any better than this one.

Too bad the original article can’t be found at that URL anymore.  Looks like he (and I, in places) was mostly wrong, though.  We’re almost to 2013, and all those technologies are still with us.



This is yet another IEqualityComparer<T> generator class.  There are no doubt dozens on the web, but I haven’t yet seen one that fully supports type inference and would thus also be useful for comparing objects of anonymous type.

It’s no more clever than anyone else’s, really.
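The class itself is no longer embedded here, but the shape of such a generator is roughly this (my own sketch, not the original code; the names ComparerFactory and Create are mine):

```csharp
using System;
using System.Collections.Generic;

// A sketch of the idea: build an IEqualityComparer<T> from a pair of
// delegates. Because Create<T> infers T from its arguments, it works for
// anonymous types too, which can never be named explicitly.
public static class ComparerFactory
{
    private sealed class DelegateComparer<T> : IEqualityComparer<T>
    {
        private readonly Func<T, T, bool> _equals;
        private readonly Func<T, int> _getHashCode;

        public DelegateComparer(Func<T, T, bool> equals, Func<T, int> getHashCode)
        {
            _equals = equals;
            _getHashCode = getHashCode;
        }

        public bool Equals(T x, T y) { return _equals(x, y); }
        public int GetHashCode(T obj) { return _getHashCode(obj); }
    }

    // The 'example' argument exists purely to drive type inference for
    // anonymous types; its value is never used.
    public static IEqualityComparer<T> Create<T>(
        T example, Func<T, T, bool> equals, Func<T, int> getHashCode)
    {
        return new DelegateComparer<T>(equals, getHashCode);
    }
}
```

With that in place, something like `ComparerFactory.Create(new { Id = 0 }, (x, y) => x.Id == y.Id, x => x.Id)` compiles without ever naming the anonymous type, which is the whole point.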




Developers should know from experience that in projects of any substantial size (pretty much anything big enough to deliver to a customer or client), there will be a need for little utility programs, libraries, and classes to handle routine tasks.  When those moments occur, what’s typically called for is something simple and easy to get up and running, because these routine tasks add little value to whatever it is you’re actually building.

Take parsing command line arguments, for example.  It adds very little value to the end product, but it’s the kind of thing that winds up being necessary.  It’s a cost of doing business with the computer.  A developer has to decide how to address the need: start from scratch and handle command line arguments manually (writing “JIT” code that will very likely become hard to update later on), adopt an existing library, or buy a vendor product?

Most developers are inclined to do the former.  I know I am – developing is what we do!  What’s one more minor thing to build in order to address the need?  The problem with this approach is more systemic than just the downside of ending up with old, crufty code that wasn’t designed well enough to be maintainable later on.  Every developer winds up with their own hand-rolled mini-library for handling the mundane – sometimes more than one!  That represents a lot of duplicated effort and inferior quality.
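To make that concrete, the sort of hand-rolled parser I have in mind looks something like this (a hypothetical sketch, not code from any real project):

```csharp
using System;
using System.Collections.Generic;

// The kind of quick, hand-rolled option loop every developer ends up
// writing: fine on day one, crufty by the time the tenth option lands.
public static class QuickArgs
{
    public static Dictionary<string, string> Parse(string[] args)
    {
        var options = new Dictionary<string, string>();
        for (int i = 0; i < args.Length; i++)
        {
            if (!args[i].StartsWith("--"))
                continue; // bare arguments are silently ignored: already a design smell

            string name = args[i].Substring(2);
            // Treat the next token as this option's value unless it is another option.
            bool hasValue = i + 1 < args.Length && !args[i + 1].StartsWith("--");
            options[name] = hasValue ? args[++i] : "true";
        }
        return options;
    }
}
```

It works today, but every new requirement (short flags, validation, help text) piles more conditionals into that loop, which is exactly the crufty-code trajectory described above.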

Of course, what I’ve just done is summarize a large portion of the motivation for open-source software.  More specifically, though, I recently discovered TestApi.  It’s just quietly hanging out there on CodePlex (written by Microsoft devs on the clock), but it addresses a handful of these mundane infrastructure things that get in the way of us building software with real value.

I just wish I had known about it a long time ago.


Project Management Tools Review

Introduction (or, what is it all about?)

I’ve been trying to figure out a good solution for the project management needs I’m facing.  There are literally hundreds of tools intended to help developers make better software, and dozens of methods and philosophies (though only a few really prevail in the mainstream).  My team leans agile, but we need some better tools to manage ourselves.  I’ve been re-reading a lot of stuff on agile methodology and spending some time thinking through the current state of our discipline (as it applies to our team in particular).

Questions I’ve been pondering are:

  • What are we doing now?  I mean everything, not just what works or what doesn’t.  What is the total sum of how we practice software development?
  • What things should we stop doing?
  • What things should we start doing?

One of the things I came across in my reading is The Future of Agile Software Development.  Dubakov makes some interesting points, and I’m not sure I necessarily agree with all of it (there is some good back-and-forth in the comments section), but I think his thinking about the focus points of good software development is reflective of the collective wisdom that informs the current state of the art in software development: do right things, do them right, and do them fast.  I think this is the goal of modern Agile methodologies, and I’m a bit more optimistic than Dubakov that they can help us get there.  However, his outlook is a good reminder that part of the Agile philosophy is to have the self-awareness to back off and refocus on the core goals (do right things right, and fast).

Software Development Focal Point (original content at targetprocess.com)


As I think about that focal point, I also reconsider the various stakeholders and their perspectives.

Developers as craftsmen

We take satisfaction in our work for its own sake because that’s how we’re wired.  We’re makers.  What really makes for a good weekend is to know that the last 5 days were spent making something, and that it was good.  We are made in the image of God by God, who delights in his own handiwork.  He graciously gave us the same attribute, and it manifests itself in a spectacular kaleidoscope of human talents and creative works.  His desire is that we delight in him while experiencing joy in our derivative creations.  This delight is made possible by Jesus Christ, who in his death absorbed the wrath of God over the corruption of the world (and proved it in his resurrection), and by faith we are enabled to delight in him in this way, to his own glory.  By his grace, those who don’t believe these things still take pleasure in their work, even if there’s no particular spiritual meaning  in it for them.  That’s why Monday is my favorite day of the week, not the reverse: I come to work with anticipation of the joy to be had in doing good work.

As developers, we want to make perfect software.  We want it to be bug-free.  We want the code to be elegant, flexible, loosely coupled, well-documented, clever, consistent.  We want it to be optimally efficient in time and memory (“performant” is not a word, and even if it were, it’s a horrible, imprecise word).

We also want to work in smoothly operating teams.  We don’t want there to be discord over minor issues.  We want to be managed well (and my definition of “good management” whether by a team lead like myself or by a project manager or whomever is “makes our jobs easier”).  We want to have a sense of purpose, and we want to feel organized.  We want to receive confirmation when our work is good, and when it’s not, we want to know how, why, and the best way to fix it.  We want to be continuously growing in our skills, both in depth and breadth.


Clients

For the purpose of this thought process, I consider “client” to mean “the one paying to have the project done”: the one who has been persuaded of the strategic value, great or small, of the project.  They are running a business; perhaps a large one, or even multiple businesses under a single corporate umbrella.  Whatever the particular structure of the “business”, someone decided that the project was worth doing.  But why?

They think that, ultimately, the sum total value of the project (its activities, processes, and deliverables) is worth more to the business than not executing the project.  Much of the time, this is a simple calculation: how much money do they have to spend, what’s the business need, and what are the various options for satisfying it?  We would do well to always be mindful that our project wasn’t the only option.

Business owners

I think of clients and “business owners” separately, although in some cases they may be the exact same individual or individuals.  Either way, there is overlap in these two perspectives.  The business owner is something of a corollary to the developers.  They want the project to succeed, but they want it to succeed on their terms.  Business owners are the most impacted by the problem at hand, and have the clearest ideas about what the solution should look like.  They aren’t (usually) particularly technical, and a lot of the time don’t really have any sense of the possibilities, but they know what they want their daily work to look like and produce.  The point of the project is to make manifest the most important demands of that vision within the resource constraints of the client.  That is value.

Software firm management

Lastly, there is the management of the firm responsible for executing the project.  Their goal is the mirror of the client’s: provide a solution to the problem at an even lower cost than the client’s resource constraints demand.  Much of the responsibility for planning a successful project falls here.  Up front, management has to decide whether a potential project is worth doing, and there are all kinds of considerations that go into that.  Part of it is evaluating the client to understand whether he has the resources to commit, whether he can describe his problem well, or whether he even understands his own problem.  Management has to keep a close eye on the cost of the value provided to the client, and somehow forecast whether enough value can be provided at a low enough cost to both keep the client happy and make a profit.

What do we need from our tools?

This leaves us with some complex needs and desires.  The stakeholders have competing visions.  Sometimes we even have directly competing visions.  Developers, if asked when something will be done, would often prefer to retort, “when it’s done.”  Obviously, this won’t do.  But we still need to get it done and do our best to satisfy all four of the visions simultaneously.

The tools have a tall order to fill, to be sure.  In order to even begin to understand what we need from the tools, we’ve got to regurgitate these visions into some concrete features (hey… kinda sounds like what we do for a living!).  Let’s just break it down vision by vision (and some features will be common to more than one vision).

At a high level, the developer vision is best supported by an agile methodology (no surprise there, because the agile philosophy was formed by developers).  It isn’t perfect, but it’s the best we’ve come up with so far.  So what does that mean for our tool(s)?

  • We need the tool to help us capture the needs of the business owner.  Right?  We want to do the right things.  The business owner is the one that has to tell us what those things are.  The predominant agile methodologies refer to these as “user stories”, or perhaps “features”.
  • We need the tool to help us know what we’re working on right now.  Agile development is iterative.  The short work cycles that actually produce value are called iterations, sprints, or what have you, but we have to be able to break the work into manageable chunks, and put a date on it.
    • Part of the what is knowing which user stories we’re working on right now.
    • Another part of the what is understanding what each user story entails; understanding to some degree the complexity involved.
  • We need to know when we’re supposed to finish what we’re currently doing.  Developers make choices moment-by-moment about what to do or not do, and we need to have a finish line so we can make choices that are in the best interests of the project.  This allows us to make predictions — however imprecise or inaccurate — about what can be done in the given time (otherwise known as “estimates”).
  • We need the abstract artifacts of the project, like user stories, estimates, and deadlines, to be linked with the actual work product (i.e., code) so that everyone has accurate insight into the state of the project at any point in time.  This linking needs to be as automated as possible, because the more friction there is in putting data into the tool, the less good data you’re going to get.  You can take that to the bank: if developers get frustrated with the tedium of the tools, we will consciously or unconsciously avoid working with them.  That is bad for everyone.  The lower the friction (and automation means essentially zero friction), the better.
    • Code really should be linked through the SCM tool, so when changes are committed, developers can enter in annotations that link the code with the project management artifacts as a part of their normal workflow.
    • I’ll get to “release management” in the next bullet, but it’s worth mentioning that unit testing is really important, and the tools should support it in whatever ways possible.  Test failures should automatically result in bug reports in the tool, for example.  Test results should be reportable in the tool as a way to demonstrate progress or problem areas.

The client perspective has exactly one question that must be answered: “Am I getting enough for what I’m paying?”  For the client, the whole enchilada is the value proposition.  The tool must expose the value of the project.  To that end:

  • The tool has to allow the client to see what’s been done, and how much time and money was spent to achieve it.
    • The tool has to facilitate the collection of metrics that matter (lines of code are not one of them!).  Hours are a critical metric, but you’ve also got to be able to slice hours into buckets of time that correspond to the bits of work that were done (completed or partially completed user stories).
    • The tool has to give you a way to record the complexity of a user story.  This can be a non-hours measurement; in fact, it’s probably best if it’s not.
  • The tool has to give the client some insight (or help us to give the client the required insight) into what’s left, and how much time and money it will take to achieve it.
    • This needs to be done, to the extent possible, using measures that the client will understand.  Hours of labor and a billing rate are a universal metric for every client.  The tool needs to help us help the client make informed decisions about what things will cost in these terms so that various scenarios can be considered.  The best way to do this is to succeed at the first bullet just above.  A correct interpretation of past data is the best insight into future value propositions.

Every project has overruns, mistakes, and failures along the way.  Some are worse than others, but no project goes perfectly according to plan (if it did, you wouldn’t be in software, you’d be in manufacturing).  When those things occur, the second bullet above becomes incredibly important.  It’s all that matters to the client.

The business owner needs the tool to help them articulate their vision for the solution.

  • BOs need the tool to help them capture user stories.
  • BOs need user stories to be coherent, and to have context that helps both us and them understand their interrelatedness.
  • BOs (and clients) need the tool to help them prioritize the stories so that correct decisions can be made jointly about which features or stories will provide the most value to the business.  This is satisfied by good sprint planning features.  There has to be an easy way to understand quickly what parts of the BO’s vision are done, can be done, can be tested, should be done, etc.  This is similar to the second client bullet, but with a different emphasis.  This is also thought of as backlog management; a story-card board is what most agilists would call for here.
  • BOs need a way to report variances from their own understanding of their vision (e.g., bugs), and those variances have to be worked with in a similar way to the rest of the work items.  Their fixes have to be adjudicated according to business value, evaluated for complexity, planned for along with other development work.

The management needs the tool for essentially the same reasons as the client, but perhaps with some additional reporting not visible to the client.

  • What’s the cost basis of the project?
  • Similarly, what’s the expected duration and margin?
  • Who are the best performers?

So there’s the rub(ric).  The bullets above may not seem as concrete as you might like them to be, but I want to leave room for interpretation and vision from the toolmakers (also note that this isn’t really a review of SCM tools or testing tools).  The tools I plan to review:

As I review each tool, I’ll post a link and a date here on this page.  Tools not shown here either I didn’t know about or could easily eliminate from consideration because of bad reviews or obvious lack of important features.


Gist of The Day


ASP.NET CS0016 – Compiler Access Denied

Once in a while, a developer will have one of those unbelievably frustrating problems with his environment or platform that makes him bang his head against the wall for hours.  I’m sure people in other trades have similar experiences somehow, but I don’t know of a good parallel offhand.

I just experienced such a problem with ASP.NET.  I had imported a deployment package of a self-hosted NuGet feed, which is just an empty ASP.NET application built as explained in the NuGet documentation.  Anyway, after importing the deployment package using Web Deploy (via the IIS Management Console), I had an application under my default web site called “nuget”.  After that, I kind of expected it to Just Work, at least from the server itself.

I opened Internet Explorer and navigated to http://localhost/nuget, and got the following nastiness:

Compiler Error Message: CS0016: Could not write to output file ‘c:\Windows\Microsoft.NET\Framework\v4.0.30319\Temporary ASP.NET Files\root\62d43c41\27d749ca\App_Code.7lodcznm.dll’

The particular DLL the compiler was trying to write was different from the one shown, but this is the error.  Well, I found several posts glibly saying “all you have to do is make sure the ‘Temporary ASP.NET Files’ permissions are set properly”.  It sure sounded easy enough.  Turns out, it was too easy, because it didn’t work.  I ended up giving full control to “Everyone” to kind of prove what was really going on.  After that didn’t work, I ran Procmon and filtered carefully for a non-SUCCESS event on a path including “temp” from a process called “csc.exe” (the C# compiler).  Sure enough, I found this:

3:01:01.3158251 AM	csc.exe	6428	CreateFile	C:\Windows\Temp\RES3277.tmp	ACCESS DENIED
Desired Access: Generic Read, Disposition: Create, Options: Synchronous IO Non-Alert, Non-Directory File,
Attributes: N, ShareMode: None, AllocationSize: 0

You can see here that the actual error is not what appeared in the browser window, frustratingly. I gave Full Control permissions to the Everyone group on \Windows\Temp, and that seemed to fix it.

Well, that won’t do – it’s not a good idea to do the equivalent of “chmod 777” on \Windows\Temp, so I needed to give access to just the DefaultAppPool user.  Well, guess what?  When I went through Explorer, I was not able to find that user through the UI to give it permissions.  Ultimately, I figured out that I could do it using icacls:

PS C:\Windows> icacls Temp /t /grant "IIS AppPool\DefaultAppPool:(F)"

After doing the same to the “Temporary ASP.NET Files” folder, I was done.  Finally.  Now I’ve got a 404 error to fix…


PowerShell+SharePoint Quickie: Export List Data

I’m packaging up some work I did for a client in a set of SharePoint features, and some of the features include list instances. In this case, I needed to package up a list instance into feature xml so it could be deployed from the administrative web interface, but I didn’t have the list data in that format. I didn’t want to enter the data by hand, and there wasn’t an easy, readily-available tool to help extract the data in the format I needed. Enter PowerShell:

$site = New-Object Microsoft.SharePoint.SPSite("http://mysharepointsite")
$site.RootWeb.Lists["MyList"].Items |% { "<Row>`n  <Field Name=`"ID`">{0}</Field>`n  <Field Name=`"Title`">{1}</Field>`n</Row>" -f $_["ID"],$_["Title"] }

This spits out XML formatted for use in a <ListInstance> element.


New Concurrency Support in .NET 4

I’ve been reading about some of the new support for parallelism and concurrency in version 4 of the .NET Framework, and came across a really good paper on parallel design patterns by Stephen Toub (most or all of which also seems to be covered in the Parallel Programming with Microsoft .NET book).  There are some really convenient additions to the framework in .NET 4 that help make parallelism easier and less error-prone.  The point of this post is to illuminate a good example of how this is so.

About a year ago, I was working on a class library that was supposed to do some filtering on streams – something along the lines of Boost.Iostreams, which gives you the ability to create pluggable, customized “sources” and “sinks” (producers and consumers) and filters, set up in a stream pipeline.  One simple application of such a pipeline (and apparently Toub’s favorite) is compressing and encrypting some data.

Input is read from the source stream, passed through the filters, and written to the output stream.

I found Toub’s MSDN Magazine article describing how one could parallelize this, and incorporated the sample into my class library.  The basic idea is that you set up a blocking queue of chunks of data, and then use that queue (or queues) to connect the inputs and outputs of the streams and filters, and just let it fly.  The queue serves as the synchronization mechanism, fulfilling the producer role required by the input side of each piece, and the consumer role required by the output side of each piece.
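The shape of the idea is roughly this (my own minimal sketch using .NET 4’s BlockingCollection<T>, not Toub’s actual sample; this pipeline only compresses, but an encryption stage would connect the same way, through a second queue):

```csharp
using System;
using System.Collections.Concurrent;
using System.IO;
using System.IO.Compression;
using System.Threading.Tasks;

public static class PipelineSketch
{
    // One producer reads fixed-size chunks from the source; one consumer
    // compresses them and writes to the destination. The bounded
    // BlockingCollection is the synchronization mechanism: Add blocks when
    // the queue is full, GetConsumingEnumerable blocks until data arrives,
    // and CompleteAdding propagates end-of-stream to the consumer.
    public static void Run(Stream source, Stream destination)
    {
        var chunks = new BlockingCollection<byte[]>(boundedCapacity: 16);

        var producer = Task.Factory.StartNew(() =>
        {
            var buffer = new byte[8192];
            int read;
            while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
            {
                var chunk = new byte[read];
                Array.Copy(buffer, chunk, read);
                chunks.Add(chunk);
            }
            chunks.CompleteAdding();
        });

        var consumer = Task.Factory.StartNew(() =>
        {
            using (var gzip = new GZipStream(destination, CompressionMode.Compress, leaveOpen: true))
            {
                foreach (var chunk in chunks.GetConsumingEnumerable())
                    gzip.Write(chunk, 0, chunk.Length);
            }
        });

        Task.WaitAll(producer, consumer);
    }
}
```

The reader and the compressor run concurrently, and neither needs any explicit locking; the queue does all the coordination.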


Re: Do I Need To Learn Microsoft Technologies

A long time ago, in a galaxy far, far away (Stackoverflow.com, November 2008), a university junior posted this question:

Most jobs speak of C#, Visual C++, .NET, Java, etc etc Where as I am mainly using Java, C++, Perl, Python and programming to the standard Unix standards, would I be better off ditching Linux and spending my last year of University brushing up on Windows based technologies, languages and API’s, would this increase my chance of getting into the industry?

A fellow named Andy Lester responded with this:

I suspect that your “most jobs” observation is from looking in the wrong places.

Whether or not “most jobs” are using MS technologies, would you WANT to work with MS technologies? If you went and boned up on your .NET and Visual C++ and had to use Windows all day, would that be the kind of job you wanted? If not, then it doesn’t matter if that’s what “most jobs” call for, because those aren’t the jobs for you.

There are not hundreds of jobs out there available for you that are a good match for you, and for which you are a good match. Don’t worry about the broad playing field of the job market, but instead focus on the jobs that DO interest you.

I responded:

I think this is stupendously bad advice. Of course you should bone up on Microsoft technologies. The chances of you making it through a 40-year career in technology without having to work with MS stuff is slim to none. Of course, the real answer is…focus on what you’re learning in school first.

Now recently, I was busy doing something else and I had long forgotten all about this exchange, and Andy has replied back:

Ben’s right, you’re likely to have to use Microsoft technologies, if that’s how you want your career to take you. What I think we’re seeing here is the difference in viewpoints between someone like Ben who seems to think primarily in terms of maximum salary and maximum employability, and someone who thinks about the importance of loving what it is that you do for a job.

He then spends the rest of his post regurgitating his original point: focus on the jobs that interest you.

Firstly, to Andy: I don’t think your point is a bad one, but I still think it was a bad answer to a 21-year-old kid trying to figure out how to make himself marketable.  Of course you should be brilliant at the things that interest you – but it’s also wise to be familiar with tools and techniques that don’t, because they may very well allow you to sustain a situation in which you get to do the things that interest you.

My college situation was very similar to the OP’s: I went to a good computer science school, but all the tools totally revolved around Java, C, and Unix.  I never learned any of the Win32 API, and only after my Junior year did I really get a chance to learn anything about Microsoft developer technologies.  That was a huge deal.  It helped broaden my horizons a bit, even if my first job out of school turned out to be a realtime embedded systems job.  But you know what?  After a few years I decided that I wasn’t terribly interested in what I was doing at that job any more, so because I was familiar with tools that I didn’t typically use, I had the flexibility to quit that job and find another one pretty quickly.

Another reason why it would have been smart for the OP to bone up a bit on .NET development or the Windows APIs is simply that he may not really know what he is interested in.  If all you’ve ever done is Java on Linux, how could you?  It could also be that what he’s really interested in is distributed algorithms (or any technology-agnostic application domain), in which case the particular platform or toolchain involved is immaterial.  If that were the case, then it’s a no-brainer: get familiar with new stuff before you graduate, and you’ll be more attractive to potential employers.

So, simply put: my point wasn’t that you shouldn’t pursue the things that interest you, but that it’s wise (particularly for a new grad) to round out your skill set to increase the odds that you’ll get to do the things that interest you.

Just ask a Lisp programmer.


Happy Pi Day