GNOME Bugzilla – Bug 140676
Track project actuals vs. planned through phases
Last modified: 2013-03-12 16:52:43 UTC
Good project management is all about accurate estimation of tasks. You need to be able to easily create reports, or at least identify differences between when project tasks were originally estimated to start/finish, or what they were estimated to cost (which is really a function of work/duration and resources), and when they actually started or finished and what they actually cost.

The proposal is that we have a Project Phase, so we should be able to "commit" a phase and have the current tasks, work, duration, completion and resource assignments (and thus, indirectly, the costs) remembered/tagged with that Phase. We would then not be constrained to just one Baseline (as e.g. MS Project is) but could have many. If someone wants to call a Phase "Baseline" then fine, but Planner should not care what the snapshot is called.

So I envisage a "Commit Phase" button, which snapshots the above data for the phase into the XML datafile/database. You should then be able to "Rollback" to a particular Phase, or report the differences between any two phases. The reports should give you things like slipping tasks, tasks that should have finished, under/over-estimated tasks and the like.

I envisage a report/view which compares any two phases and shows Start, End, Work, Duration, Cost and completion for Task(ID, phase 1) and Task(ID, phase 2). We could even use the current gantt display, e.g. with half-height gantt bars so you can see the difference visually (if both phases match, the gantt bar looks like one normal full-height bar; otherwise it looks skewed), e.g.:

  ID    StartA   StartB   EndA    EndB
  Task  1/1/99   1/1/99   2/6/99  2/6/99   [===============]
  Task  5/5/99   10/6/99  5/6/99  5/6/99   [----==]

The reporting will be split into a separate bugzilla which is dependent upon this one, as it is quite a bit of work.
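To make the phase-comparison idea concrete, here is a minimal sketch of what a "slipping tasks" report between two phase snapshots could look like. The TaskSnapshot structure and field names are purely hypothetical, not Planner's actual data model; the point is just that once each phase stores per-task start/end/work, diffing two phases keyed on task ID is simple:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TaskSnapshot:
    # Hypothetical per-task record stored with each committed phase.
    task_id: int
    start: date
    end: date
    work_hours: int

def slipping_tasks(phase_a, phase_b):
    """Compare two phase snapshots keyed by task id and report tasks
    whose finish date moved later between phase A and phase B."""
    a = {t.task_id: t for t in phase_a}
    slips = []
    for t in phase_b:
        base = a.get(t.task_id)
        if base and t.end > base.end:
            slips.append((t.task_id, (t.end - base.end).days))
    return slips

baseline = [TaskSnapshot(1, date(1999, 1, 1), date(1999, 6, 2), 80)]
current  = [TaskSnapshot(1, date(1999, 1, 1), date(1999, 6, 9), 96)]
print(slipping_tasks(baseline, current))  # [(1, 7)] - task 1 slipped 7 days
```

The same diff, pushed into the gantt view, would drive the skewed half-height bars described above.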
Had some thoughts on this, and I suggest the following is done to effect this feature. For a project we add a new depth (of node) in the XML file and call it <snapshots>, under which there is a <snapshot ...> with attributes of current-project = YES|NO, default-baseline = YES|NO, keyed-on = PHASE|DATE, saved-phase = "phase", saved-date = "a date". Under that <snapshot ...> are the current tasks, resources and assignments, the same as today, only one level further down the tree.

We then offer dialogs to:
- Save Snapshot (options of key-on phase or date)
- Delete Snapshot
- Make Snapshot current project (my favourite)
- Make Snapshot default Baseline

Thus the workflow is that you open Planner as normal and work as normal. Saving a project simply saves the current project as normal, with the current tasks, resources and assignments being saved to a ...

  <snapshots>
    <snapshot ...>
      <tasks> </tasks>
      <resources> </resources>
      <assignments> </assignments>
    </snapshot>
  </snapshots>

with current-project=YES and keyed-on=PHASE (which could be NULL). No change there. Then, if you want to, you can change the phase, and when you save it will save a project snapshot to a different phase and make that the current project.

Now the neat stuff: you could open the Make Snapshot Current Project dialog, pick a previous snapshot and effectively roll back (or roll forward) to that Phase. This is a much more powerful (and quite simple) feature. I could thus create two completely different project scenarios, e.g. SCENARIO-QUICK and SCENARIO-CHEAP, with different resources and tasks, and switch between the two to experiment as to which is the "best" project, all within one Planner file. In the future we could also compare snapshots with the baseline (which is just a snapshot in its own right, but flagged as being the "baseline") or with other snapshots, e.g. rank by costs or end dates or similar.
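As a rough sketch of the "Make Snapshot current project" operation on the proposed schema (the element and attribute names below follow the proposal above, but this is an illustration, not Planner's actual file format or code): rollback is just flipping the current-project flag from one <snapshot> to another.

```python
import xml.etree.ElementTree as ET

# Toy document using the proposed <snapshots>/<snapshot> layout.
DOC = """
<project>
  <snapshots>
    <snapshot saved-phase="SCENARIO-QUICK" current-project="NO">
      <tasks/><resources/><assignments/>
    </snapshot>
    <snapshot saved-phase="SCENARIO-CHEAP" current-project="YES">
      <tasks/><resources/><assignments/>
    </snapshot>
  </snapshots>
</project>
"""

def make_current(root, phase):
    """Flag the snapshot saved under `phase` as the current project,
    clearing the flag everywhere else (a rollback/roll-forward)."""
    for snap in root.iter("snapshot"):
        is_target = snap.get("saved-phase") == phase
        snap.set("current-project", "YES" if is_target else "NO")

root = ET.fromstring(DOC)
make_current(root, "SCENARIO-QUICK")
current = [s.get("saved-phase") for s in root.iter("snapshot")
           if s.get("current-project") == "YES"]
print(current)  # ['SCENARIO-QUICK']
```

Switching between SCENARIO-QUICK and SCENARIO-CHEAP, as in the example above, is then a single attribute flip plus a reload of the now-current subtree.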
Note: This will get heavily into "Cone of Uncertainty" theory, but it's critical to effective software project management, especially when it comes to estimation. From a software estimation standpoint, there are three values an estimator is concerned with: the "nominal" (expected) time a task/project will take, a worst-case time, and a best-case time. These best-case and worst-case values can be determined from historical data (if available) for work of similar scope with similar resources.

For example, if I have ten historical projects that all shared the same work scope and resources as the project I want to plan, and those projects averaged 10 man-months to completion, at worst took 13 man-months and at best 9 man-months, I know that I should be able to use that data to help me predict future performance. I know that I should never negotiate work rate, because it generally backfires on me. On the other hand, I do have three things I can negotiate: the work to be done, the time till completion, and the cost (associated with increasing the resources dedicated to the work). I also know that those projects typically cost me $40,000 to complete.

My customer wants to know if I can complete another of these for $60,000 in ten months. At first blush, I would say certainly. Realizing, however, that I need more information to make sure we don't head toward our 13-month worst-case situation, I hedge my commitment instead and reply, "We can perform the work toward the next phase of development with an anticipated end price of $60,000, to complete somewhere between 9 and 13 months, anticipating it'll get done in the tenth month. Since we have the requirements completed, the next phase will be completed once the architecture is done for this project. An updated time estimate will be given to you at that time. Does that work for you?"
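The best/nominal/worst reasoning above is easy to mechanize. Below is a small sketch that derives the three values from a set of completed similar projects; the history numbers are made up to match the story (ten projects, average 10 man-months, worst 13, best 9), and the weighted "expected" figure uses the common PERT-style (best + 4*nominal + worst)/6 formula, which is my addition, not something proposed in this bug:

```python
def historical_range(man_months):
    """Derive best/nominal/worst estimates from completed projects of
    similar scope, plus a PERT-style weighted expected value."""
    best, worst = min(man_months), max(man_months)
    nominal = sum(man_months) / len(man_months)
    expected = (best + 4 * nominal + worst) / 6
    return best, nominal, worst, expected

# Hypothetical history: ten similar projects, in man-months.
history = [9, 9, 9, 10, 10, 10, 10, 10, 10, 13]
b, n, w, e = historical_range(history)
print(b, n, w, round(e, 2))  # 9 10.0 13 10.33
```

A scheduler that stores these per-task ranges could then report a completion window (9 to 13 months here) rather than a single date.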
The customer responds, "How much would it cost me to get that project done in eight months?" My historical data shows that isn't possible without reducing the amount of work to be done; we've never been able to work at that rate with the resources we have. We know that putting more people on this job would not significantly decrease the time required, since a large part of it is time-dependent. Therefore, I respond, "We could try to complete that task in eight months; however, we cannot guarantee that we can do it in that short a time-frame. Our historical data shows we've been successful in the past on ten similar projects taking anywhere from 9 to 13 months, averaging 10 months. We don't have any new technology available to us that would allow us to work faster, so committing to 8 months would guarantee failure. On the other hand, if you'd like us to reduce some of the work required to complete this job, we would stand a better chance of succeeding."

The customer responds, "Acme Widgets says they can do it in eight months." I respond, "We've seen Acme Widgets do that in the past. Three of the past ten jobs we did on this were for former Acme Widgets customers. They're happy that we give solid estimates and stick to them. Each of these customers told us the same thing: 'Acme Widgets grossly under-estimated the project's completion time and ended up giving us a substandard-quality product, over budget and over time.'"

The prospective customer responds, "We'll go check out Acme and get back with you." A few days later, the prospective customer comes back and signs a contract for the product. They respond, "Acme gave us numbers so quickly, it was obvious they were trying to pull a rabbit out of their hat. You gave us hard data supporting your estimate. We feel confident that you know what you're doing. Here's the first check."
The point of this story is to emphasize how important it is to use historical data to give a range of performance possibilities, not a single target that will likely move. It also emphasizes how important it is to be able to estimate using a range of potential completion dates versus a precise (yet likely inaccurate) date. For those who have not yet taken the class, I strongly recommend Construx's Software Estimation class. It covers methods to improve software estimates using historical data, methods to understand defect rework rates, and the associated costs around a project, both in estimation and performance. For my company, it was money very well spent.
As for tracking actuals vs. estimates, earned value is a very effective way to do this. Another user has already submitted a terse request for this (<a href="http://bugzilla.gnome.org/show_bug.cgi?id=316431">316431</a>). MS Project's EV implementation is -terrible-, and I haven't found any other scheduler that attempts it, so I've always used a spreadsheet. In essence, however, it's just adding a few float columns with simple formulas that can be baselined. (Baselining separate subsections is a great idea!) <a href="http://en.wikipedia.org/wiki/Earned_value_management">http://en.wikipedia.org/wiki/Earned_value_management</a>

Actually, a lot of this stuff would be easy to set up custom if float columns could have some simple spreadsheet-like formulas referencing other columns; maybe I'll post another request for that if there isn't one already.

P.S. The Construx class was worthwhile! However, I'd suggest keeping actual-vs-estimate and best-case/worst-case orthogonal; I think they're from two different use cases that may or may not coincide depending on your process. The cone of uncertainty applies, for example, to waterfall and inception-phase RUP scheduling, while more agile shops only schedule one to a very few iterations out; for the latter, EV gives enough information to tune your estimates without having to make two estimates for everything.
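To show how little is needed for the "few float columns with simple formulas" mentioned above, here is a sketch of the standard EVM quantities (as defined on the linked Wikipedia page); the function and parameter names are illustrative, not any scheduler's API:

```python
def earned_value(bac, pct_complete, pv, ac):
    """Basic EVM columns for one task or rollup:
    bac          - budget at completion (baselined cost)
    pct_complete - fraction of the work actually done
    pv           - planned value: budgeted cost of work scheduled to date
    ac           - actual cost of the work performed to date"""
    ev = bac * pct_complete          # earned value: budgeted cost of work performed
    return {
        "EV": ev,
        "SV": ev - pv,               # schedule variance (< 0 means behind)
        "CV": ev - ac,               # cost variance (< 0 means over budget)
        "SPI": ev / pv,              # schedule performance index (> 1 is ahead)
        "CPI": ev / ac,              # cost performance index (> 1 is under budget)
    }

# Illustrative numbers: a $40,000 task, half done, that was planned
# to be 60% done by now and has cost $22,000 so far.
m = earned_value(bac=40000, pct_complete=0.5, pv=24000, ac=22000)
print(m["EV"], m["SV"], m["CV"])  # 20000.0 -4000.0 -2000.0
```

Each of these is a plain arithmetic combination of columns the scheduler already has (baselined cost, % complete, actual cost), which is why spreadsheet-like formula columns would cover it.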
*** This bug has been marked as a duplicate of bug 316430 ***