This blog has, IMO, some great resources. Unfortunately, some of those resources are becoming less relevant. I'm still blogging, learning tech and helping others...please find me at my new home on http://www.jameschambers.com/.

Thursday, October 28, 2010

Visual Studio 2010 Wishlist – Better Collapsing Region Support

Here it all is, in one picture: everything that could be better with Visual Studio 2010’s collapsing helpers.


1) XML Comment Block Collapsing

Visual Studio has had great support for XML commenting for some time, specifically with the triple slash to quickly document existing functions.  Which is why this sucks so bad.
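For context, here's roughly what one of those triple-slash blocks looks like (the class and method below are invented purely for illustration); collapse it, and the editor shows you little more than the name of that first tag.

    public class BillingCalculator
    {
        private readonly decimal _dailyRate;

        public BillingCalculator(decimal dailyRate)
        {
            _dailyRate = dailyRate;
        }

        /// <summary>
        /// Calculates the pro-rated amount for a partial billing period.
        /// This is the text I'd love a peek at when the block is collapsed.
        /// </summary>
        /// <param name="days">Number of days used in the period.</param>
        public decimal ProRate(int days)
        {
            return days * _dailyRate;
        }
    }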

Summary?  Seriously?  I don’t need to know what the first tag is in my comment block.  The IDE already knows to treat these blocks differently (it allows you to collapse them), so why not show me something useful?  Even the first 40 chars followed by a … would be great.  Keep it on one line – that’s why I collapsed it – but let me see what it’s about.

End of rant.

2) and 4) Contiguous Comment Lines

Here I have a series of comments, one after another.  I’d like to be able to collapse them.

3) and 5) Language Constructs

This should be a no-brainer.  Ifs, foreaches, tries, fors…they should all be collapsible.

Further, how about supporting SHIFT + CTRL + ‘+’ and ‘-’ to handle this one?  What’s that? You’re in a foreach?  No worries, let me collapse that for you quickly while you figure out the context, then you can easily expand back out!

That would be sweet.

6) Arbitrary Selection

When I margin-select, or multi-line select any block of code, I would like to see a collapse marker appear in the margin.

But it’s all good…

The truth is that I am so completely fortunate to have the means to work on a big fat 24” monitor and I am not challenged with space. 

Just about, but not quite.

I can still use CTRL+Mouse Wheel to zoom in/out and I do have 3 screens in front of me (one 1900x1200 and two 1280x1024) for real estate.  When things get really tight, vertically, I can always resort to using auto-hide on my error list.  Pshshh!  I don’t have any errors anyways!

I can work through the lack of support for these collapsing features, but I don’t envy the fella who’s got to work on a smaller screen.  Even with the level of awesomeness already in Visual Studio 2010, I love to think how many good things must still be coming down the road.

Friday, October 8, 2010

Unravelling the Data – Ill-Formatted Data

 

Read the background to this post.

When Bad Data Is Required

Fixing the data in the legacy system was not something that could be done in place.  What I would refer to as ‘bad’ data was in some cases the glue that held reports together and made things like billing work.

This was one of the first things I had to resolve.  My original plan was to “self-heal” the data through a combination of regular expressions, string replacements and templated hints and helpers.  With the sheer number of discrepancies, this approach was DOA, and manual intervention was required.
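For the curious, the self-healing pass looked roughly like this (the field and the target format are made up for illustration): each rule either produced a confident fix or gave up, and far too many values landed in the "gave up" bucket.

    using System.Text.RegularExpressions;

    public static class SelfHeal
    {
        // Try to coerce a free-text phone field into ###-###-####; return null
        // when the value is too ambiguous to fix without a human looking at it.
        public static string TryNormalizePhone(string raw)
        {
            if (string.IsNullOrEmpty(raw)) return null;

            // Strip everything that isn't a digit, then insist on exactly ten.
            string digits = Regex.Replace(raw, @"\D", "");
            if (digits.Length != 10) return null;

            return string.Format("{0}-{1}-{2}",
                digits.Substring(0, 3),
                digits.Substring(3, 3),
                digits.Substring(6, 4));
        }
    }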

A Side Order of Data Fixin’

I took a snapshot of the database and added additional columns to the tables where combined data was present.  To understand ‘combined data’, a little background will help.

At various points in the application lifecycle, management had decided that they weren’t going to use certain fields for their original purpose and started using them for a new one.  In other scenarios, they decided to use the fields in one context for some customers and in a different context for other customers.

Depending on the customer and how long it took employees to shake old habits, these fields were used in differing ways over extended periods of time.  Furthermore, even if there had been a clear cut-over point, none of the records in the database had a last-modified date or any kind of audit log revealing when a customer record was changed in a meaningful way.

Thus, my side-order approach faced another problem: there was no clean cut of the data, and the existing applications needed to keep running.  A snapshot of the data today wouldn’t help in the transition 6 months down the road.

The Birth of the Transition Platform

The solution was to create an ASP.NET MVC application, hosted only on the intranet, that used much of my original approach to identifying bad data but left the “healing” to an end user.

Where possible, I used jQuery to look up context-based fixes through controller actions and mashed up some save functionality by POSTing to the legacy ASP pages of the original application.  Where it wasn’t possible (where functionality would be affected by changes to the data), I created proxy tables to house the ‘corrected’ version of the data and wrote some monitors to periodically check that the data was up to date.
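One of those monitors, boiled down, looked something like the sketch below. The table and column names are hypothetical – the idea is simply to flag proxy rows whose legacy source value has drifted since the correction was recorded.

    using System;
    using System.Data.SqlClient;

    public class ProxyTableMonitor
    {
        public static void ReportStaleCorrections(string connectionString)
        {
            // Compare the legacy value against the snapshot taken when the
            // corrected version was written into the proxy table.
            const string sql = @"
                SELECT p.CustomerId
                FROM   BillingAddressProxy p
                JOIN   LegacyCustomer c ON c.CustomerId = p.CustomerId
                WHERE  c.BillingAddressRaw <> p.LegacySnapshot";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // The real monitor would re-queue the record for review;
                        // here we just report it.
                        Console.WriteLine("Stale correction for customer {0}", reader.GetInt32(0));
                    }
                }
            }
        }
    }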

I grouped the functionality of the fixes into distinct controllers.  For instance, anything related to a billing address lived in the BillingAddressController, with actions to support the corrections required for errors related to that piece. The models were essentially view-model versions of the “bad data”, and I used repositories not only to connect to the legacy system, but also to maintain a worklog of outstanding and completed tasks.
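A condensed sketch of that layout follows; the action names, view model and repository interface here are stand-ins I've invented, not the actual code.

    using System.Web.Mvc;

    // Hypothetical view model and repository contract, standing in for the real ones.
    public class BillingAddressFixModel
    {
        public int WorklogId { get; set; }
        public int CustomerId { get; set; }
        public string CorrectedAddress { get; set; }
    }

    public interface ICorrectionRepository
    {
        object GetSuggestedFixes(int customerId);
        void SaveCorrection(BillingAddressFixModel model);
        void MarkComplete(int worklogId);
    }

    public class BillingAddressController : Controller
    {
        private readonly ICorrectionRepository _repository; // legacy data access plus the worklog

        public BillingAddressController(ICorrectionRepository repository)
        {
            _repository = repository;
        }

        // jQuery calls this action to fetch context-based fix suggestions for a record.
        public JsonResult SuggestedFixes(int customerId)
        {
            return Json(_repository.GetSuggestedFixes(customerId), JsonRequestBehavior.AllowGet);
        }

        // Applies the correction the user chose and closes out the worklog item.
        [HttpPost]
        public ActionResult ApplyFix(BillingAddressFixModel model)
        {
            _repository.SaveCorrection(model);
            _repository.MarkComplete(model.WorklogId);
            return RedirectToAction("SuggestedFixes", new { customerId = model.CustomerId });
        }
    }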

This worked great, as I was also able to say, at any given point, what percentage of any given set of data had been corrected.

This process continues today, and time is devoted to cleaning data each week.  All three of the legacy systems continue to get (mis)used, though accuracy has been greatly improved.  As users became aware of the expected formats, they also became more conscious of how they were entering data into the older software.

This first win made the next steps more plausible.

Next up: Data that Could be Auto-Corrected

Thursday, October 7, 2010

Where are we Taking this Thing?

In a way, I have been a linguist and advocate of literacy for most of my life, but perhaps not as you would expect. 

I started copying programs from books and magazines when I was 4 years old.  I started writing my own code when I was about 7.  As I gained a greater knowledge of computer programming, my concern also grew about how others would learn.  As technology has advanced and topics in computer science have become "solved", the underlying complexities have also grown, and I worry that we are raising a generation that will not be equipped to deal with the emerging languages.

In fifth grade I wrote a text-based choose-your-own-adventure game on the Commodore 64 and brought my creation to school.  My classmates could put their own names in and play along, choosing their way through my somewhat limited and unoriginal stories. I stood back in the computer lab and watched as they played; they were fascinated!  I remember my teacher, Mr. Pugh, came over and said, "You know, James, most of them won't understand what you've done."

When we wanted to see graphics on the screen as kids, we set array values mapped to registers in video memory to turn individual pixels on and off.  We programmed the hardware. We “mapped bits” and created “bit maps”.

Today, with a single line of code, we can bring a myriad of pixels to life with vibrant color and movement and full-screen HD video streaming across a network we don't even own.  What you tell the computer to do is no longer what the computer is doing: it's doing much, much more and it doesn't require of you a greater understanding.

Here's an excerpt from Douglas Rushkoff's new book, Program or be Programmed:

When human beings acquired language, we learned not just how to listen but how to speak. When we gained literacy, we learned not just how to read but how to write. And as we move into an increasingly digital reality, we must learn not just how to use programs but how to make them.

In the emerging, highly programmed landscape ahead, you will either create the software or you will be the software. It’s really that simple: Program, or be programmed. Choose the former, and you gain access to the control panel of civilization. Choose the latter, and it could be the last real choice you get to make.

I don't necessarily buy into the doomsday duality scenario of zombies and computer programmers, but there is some truth in there, and I wonder what outcomes it holds for humanity and culture.

Wednesday, October 6, 2010

Unravelling the Data – Understanding the Starting Point

So, we’re now into October and the year is passing quickly.  The major function of my employment – helping the organization flip to a new operations platform – is nearing completion.  As well, I have just wrapped up an 11-week series of articles with a publisher that I am very excited to share (but have to wait a little still!).  The articles explain my scarcity here on the blog, but I am glad to have some time to invest in this again…especially with the release of the MVC 3 framework!

What I Actually Do

My current work – at its core – is a data conversion project, but don’t let the simplicity of that synopsis fool you.

The reality is that when it comes to inventory, billing, service and customer management, flipping the company’s software means the data conversion is the easy part.

Often, it’s the process changes that can cripple the adoption of a new platform, especially when you’re moving from custom-developed software to an off-the-shelf product.  Change can be very hard for some users.

I have the good (ha!) fortune here of working through both data and process transformations.

The Transition Platform

Being the only developer on the project – and in the organization – I do have the pleasure of being able to pick whatever tools I want to work with, and the backing of a company that pays for those tools for me.

If you’ve ever hit my blog you know that I am a huge fan of the .NET Framework and the ecosystem that you get to be a part of when you develop software within it.  The tools have come so far in the last decade that you would not even believe that the same company made them.

Great progress has been made – albeit at times more slowly than other vendors’ in certain areas.  But with Visual Studio 2010 (which I switched to halfway through the project) and the MVC Framework, I was literally laughing at how trivial some of the tasks became.

The vertical nature of a development environment and a deployment environment designed to work together makes things that much more straightforward.

It is important to note that my development over the last year was not an end in itself.  What I produced was simply a staging platform that would facilitate a nearly-live transition to the target billing and customer management system.  My job, done right, would leave none of my end-user software in use.

Onto The Problem with the Data

Not all data is a nightmare.  A well-normalized database with referential integrity, proper field-level validation and the like will go a long way to helping you establish a plan of action when trying to make the conversion happen.  Distinct stored procedures coupled with single-purpose, highly-reusable code make for easily comprehended intention.

Sadly, I was not working with any of these.  The reality is that I was faced with the following problems (er, opportunities) that I had to develop for:

  • There are over 650,000 records in 400 tables. This is not a problem in and of itself, and it’s not even a large amount of data compared to projects I’ve worked on with tens of millions of rows.  It likely wouldn’t be a problem for anyone, unless they had to go through it line by line…
  • I had to go through it line by line.  Sort of.  There were several key problems with the data that required careful analysis to get through: dual-purpose fields, fields that were re-purposed after 4 years of use, null values where keys were expected, and orphaned records.
  • The data conversion couldn’t happen – or begin to happen – until some of the critical issues were resolved.  This meant developing solutions that could identify potentially bad data and provide a way for a user to resolve it.  It also meant waiting for human resources that had the time to do so.
  • The legacy software drove the business processes, then the software was shaped around the business processes that were derived from the software.  This feedback loop led to non-standard practices and processes that don’t match up with software in the industry (but have otherwise served the company well).
  • Key constraints weren’t enforced, and there were no indexes.  Key names were not consistent.  There were no relationships defined.  Some “relationships” were inferred by breaking apart data and building “keys” on the fly by combining text from different parts of different records (inventory was tied to a customer only by combining data from the customer, the installation work order and properties of the installer, for example – see the sketch after this list).
  • The application was developed in classic ASP, and the logic for dealing with the data was stored across hundreds of individual files.  Understanding a seemingly simple procedure meant untangling hundreds of lines of script, sometimes in as many as a dozen different files.
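To make the key-building point concrete, here is a rough sketch of the kind of on-the-fly "key" the legacy scripts were assembling. The record shapes and field names are invented for illustration – the real schema isn't shown here – but the technique is the same: concatenate fragments of unrelated records into a chunk of text and hope it lines up.

    // Invented record shapes, for illustration only.
    public class LegacyCustomer
    {
        public string Branch;
        public string AccountCode;
    }

    public class InstallWorkOrder
    {
        public string OrderNumber;
        public string InstallerInitials;
    }

    public static class LegacyKeys
    {
        // Inventory was tied to a customer only through a text "key" glued
        // together from pieces of the customer, the installation work order
        // and the installer - roughly like this.
        public static string BuildInventoryKey(LegacyCustomer customer, InstallWorkOrder order)
        {
            return string.Concat(
                customer.Branch, "-",
                customer.AccountCode, "-",
                order.OrderNumber, "-",
                order.InstallerInitials).ToUpperInvariant();
        }
    }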

Mashing Up Data

The items listed above were all significant challenges in and of themselves, but the reality is that these are just a sample of the problems (er, opportunities), from just one system.  I had three to work with, and all were joined by a single, ASP script-generated key.  If you just threw up in your mouth a little bit, I forgive you.  I did the same when I saw that, too.

Worse, the key was stored as editable text in all three systems.  Because of a lack of role- and row-level security, someone working their second day at the company could change the key or switch keys between users.  It was a little scary.

And I can’t imagine a manager in the world who likes to hear, “Hey, I’m going to just take three unrelated sets of data, mash them up and let you run your business on it, mmmkay?”  Obviously a better approach was needed.

Now, Here’s How I Got Through

It took over a year, but I am now close enough to the finish line that I could throw a chicken over it.  Over the next few posts, in post-mortem fashion, I’ll talk about each of the challenges I had to work through and how I tackled them.

Stay tuned for: Ill-Formatted Data