Reports/week 4
Report for Mar 3 - 7
Summary:
This week illuminated a core failing in the development process at The RedX: detailed specification and design documents are not used over the course of creating a product. Because of this, the progress made this week is pitifully small compared to the effort expended.
Details:
This week showed me that there are many concepts being taught in our class that are not being applied here at The RedX. I believe this is part of the reason we have made almost a habit of being late on every project. The biggest things I noticed this week were the lack of a centralized design doc/spec, the lack of a clear testing plan, and the lack of risk management.
First, the design doc. The Storm project is complicated. At the bare minimum there are three different servers interfacing with each other, dozens if not hundreds of concurrent users, an asynchronous, event-driven, multi-threaded project core, and a detailed web UI that has to respond in real time to events. It is true that my project manager has built this product before. However, it is also true that the product he built, the one we are currently trying to replace, was far more of a proof of concept than a fully featured, fault-tolerant product. In our effort to make Storm the product it should be, the choice was made to start from scratch and try to do it right. I believe that was the right choice. The prototype helped us learn, but it was a messy, unwieldy piece of code. However, in starting from scratch and applying the previously learned lessons, we drastically changed the design of Storm.

For example, since Storm is an auto-dialing service, there has to be logic somewhere that controls the dialing. How to move from one phone number to another, when to connect two phones, and when to go to voicemail are all examples of what the dialing logic needs to do. This logic is the heart of the system, and it is an asynchronous tangle. In the old version of Storm, that dialing logic was implemented on the client side in Javascript. That was a mistake. It belonged server-side, as close as possible to the server actually doing the dialing. So that is what we chose to do, and so far those choices have all been the right ones. The error we made came when the project manager decided that, since he had already built the system once, there was no value in producing a detailed design for the new C++ module that would run the dialing logic. I watched this week as he scrapped three different designs, each one showing its flaws only after hours or days of work. These flaws could have been caught if the team had sat down and specced out exactly what each class and interface needed to be doing. Perhaps not every dead end could have been avoided; many dead ends are revealed only because a path has a problem you never foresaw in planning. That is part of development. But I believe much of the pain experienced this week could have been avoided by taking a few days at the beginning of the project to figure out exactly what the implementation of Storm needed to look like.

I also believe it would have helped not only this week, but every week leading up to this one. Because we don't have a detailed spec and we have no design doc, there is no traceability in Storm. I have also been involved in dozens of design meetings to try to design our way out of the holes we've fallen into, which together have probably taken as much or more time than doing it all up front would have. There is great resistance to this idea, as my project manager believes that any up-front design is little better than guesswork. His attitude leads me to believe he assumes there are always catastrophic problems waiting that we could never have foreseen. The bug I was facing last week is exactly one of those. But I believe that every other issue that has slowed the team down could have been planned for and overcome before it even arose, such as the proper design of the C++ module.
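To illustrate what I mean by speccing out each class and interface, here is a minimal sketch of the kind of decision a design doc could have pinned down before any implementation started. The names (DialerSession, CallState, and so on) are my own invention for this report, not the actual Storm code; the point is only that each of these choices is cheap to make on paper and expensive to rediscover in the middle of an implementation.

```cpp
#include <queue>
#include <string>
#include <utility>

// Hypothetical sketch of a server-side dialing-logic interface; the names and
// states are invented for illustration and do not come from the real module.
enum class CallState { Idle, Dialing, Connected, Voicemail, Finished };

class DialerSession {
public:
    explicit DialerSession(std::queue<std::string> numbers)
        : numbers_(std::move(numbers)) {}

    // Move on to the next number in the queue. A spec would have to say what
    // happens when the queue is empty: here we signal completion rather than
    // throwing, but that is exactly the kind of decision that was left open.
    bool dialNext() {
        if (numbers_.empty()) {
            state_ = CallState::Finished;
            return false;
        }
        current_ = numbers_.front();
        numbers_.pop();
        state_ = CallState::Dialing;
        return true;
    }

    // Event handlers the telephony server would invoke asynchronously.
    void onAnswered()  { state_ = CallState::Connected; }
    void onVoicemail() { state_ = CallState::Voicemail; }  // drop? leave a message? requeue?

    CallState state() const { return state_; }
    const std::string& currentNumber() const { return current_; }

private:
    std::queue<std::string> numbers_;
    std::string current_;
    CallState state_ = CallState::Idle;
};
```

Even a page of this per module would have given the team something concrete to argue about before hours of throwaway implementation.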
The lack of explicit documentation has led to a great deal of ambiguity in our testing. At a fundamental level, it is fairly easy to see if the program works: either the number I type in starts ringing or it doesn't. However, as we all know, bugs lie in wait in the dark corners of the code, where nobody thinks to test because no one would ever be dumb enough to do something that would take us there. It is at the borders, the edge cases, that things break. The lack of documentation means there is no clear line about when the code should throw an exception, try to recover, or just succeed. The tests then don't serve the purpose they should. They confirm that the program works the way we think it should, and that is it. They should be confirming that it conforms exactly to the specification. It is impossible to test the edge cases appropriately, because we don't know how the system should behave in those cases. So either we sit down and have a meeting about it, pulling everyone out of flow and wasting as much time gathering people as it takes to answer the question, or we just don't write the test case. A large process flaw I have noticed is that our product is supposed to go through QA before it is released. But if there is no manual, how do they know what to test? They don't know the code, and they don't know what we have implemented. It would be far more effective if we were able to say, "This week's code adds features 2.7, 2.8, and all of 3." Then they would know exactly what to look for. Instead we have gotten bug reports on features that are not yet implemented, while implemented features have hardly been tested at all.
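As a concrete example of the kind of question a spec would settle, take something as small as parsing a phone number a user types in. Should a malformed number throw, be silently corrected, or be rejected with no value? The little test below is purely hypothetical (parsePhoneNumber is an invented placeholder, not Storm code), but without a written answer the assertions in it are guesses rather than requirements.

```cpp
#include <cassert>
#include <optional>
#include <string>

// Invented placeholder for illustration: strip formatting and accept exactly
// ten digits, returning an empty optional otherwise instead of throwing.
// Whether that is the *right* behavior is precisely what a spec should say.
std::optional<std::string> parsePhoneNumber(const std::string& raw) {
    std::string digits;
    for (char c : raw) {
        if (c >= '0' && c <= '9') digits += c;
    }
    if (digits.size() != 10) return std::nullopt;
    return digits;
}

int main() {
    // Happy path: formatting characters are stripped.
    assert(parsePhoneNumber("(801) 555-0123").value() == "8015550123");

    // Edge case: too few digits. Is "no value" correct, or should this have
    // thrown? Only a written spec can turn this guess into a real test.
    assert(!parsePhoneNumber("555-0123").has_value());

    return 0;
}
```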
The aforementioned problems come together to mean that we are not managing risk as well as we could. Instead of using PERT or Gantt charts, we use a fancy, expensive program called Pivotal Tracker, which is essentially little more than a big checklist. It completely hides the critical path, and it does not show how parallelizable any parts of the project are. This means we are unable to tell management what a delay really means. There are a few tasks I have been late on; some of them shifted the entire project schedule back, while others had no effect whatsoever. This is of course just a small part of managing risk, but it is one I have noticed we could do much better at. There are some good things we are doing too. I still believe that the process we started a few weeks back, of assigning multiple tasks to an individual so he stays productive all the time instead of bogging down on a single problem, was a great way to minimize risk. All code must be unit-tested and peer reviewed, which again lowers the risk of expensive maintenance or of the code regressing because broken stuff was committed. There is more that should be done, though. Not having a detailed design from the beginning means we don't really know which areas could be terribly difficult. If a part of the project was hard to design, or if we had realized we didn't know enough to design it yet, that would have told us that piece was one we really needed to focus on and get locked down as soon as possible.

Another small part of risk management, one we have never talked about but that is just now occurring to me, is managing the risk of burnout. Impossible goals, frequent setbacks, and hard technical challenges seem to have the same effect on every member of the Storm team. Each person wants to pull 100-hour weeks, be the hero, and save the milestone. Three months of that makes for an entire team of angry zombies. A more detailed up-front analysis of the project could have told us that the deadline to deliver the entirety of Storm by the end of the quarter was overly ambitious. With that information, we could have talked to management at the very beginning and cut features or extended the deadline to the point where the risk of losing morale had at least been diminished.