|How to turn a Balrog into a puppy dog.|
Some regulars at the pub ‘The Weeping Coder’ include:
- web service functionality that cannot support the user interface and flow
- web service downtime
- changes to the web service interface
- unscheduled releases with bugs: bug resolution, reporting, retesting
- performance issues
- no data, or bad data
|Look at the red, not around the red.|
What this illustrates, in alarming bold red typeface, is that in a medium-sized project with 4 developers working for 40 weeks, the project will be late by 102 days and the client or the supplier will have to find an extra £61,714 (other currencies are available) from under the proverbial sofa.
It is also worth noting that this deficit covers only the web site development team; it doesn't include the time spent by the web service development team responding to the bugs, tickets and issues raised during the development cycle.
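As a rough sanity check on those figures, the arithmetic can be sketched as below; the blended day rate is an assumption (it isn't stated above), chosen so the result lands near the quoted deficit:

```python
# Rough reconstruction of the overrun figures quoted above.
# The £605/day blended rate is an assumption, not a stated figure.
developers = 4
planned_weeks = 40
days_late = 102
assumed_day_rate_gbp = 605  # hypothetical blended day rate

planned_days = planned_weeks * 5          # 200 working days planned
overrun_fraction = days_late / planned_days
extra_cost_gbp = days_late * assumed_day_rate_gbp

print(f"Overrun: {overrun_fraction:.0%} of the schedule")  # Overrun: 51% of the schedule
print(f"Extra cost: £{extra_cost_gbp:,}")                  # Extra cost: £61,710
```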
So how do you avoid going into the red when delivering web projects and web services and ensure that the symbiosis doesn’t give you a mild psychosis?
Ensuring that web services are designed with the user interface in mind is essential: it makes the UI achievable and allows it to function with optimal performance.
Web service interfaces which are mapped too closely (fine-grained) to the UI can end up with a brittle definition, i.e. small tweaks to the UI will break the interface and require design and development changes.
Web service interfaces which are too broad (coarse-grained) can end up making the UI carry a lot of the responsibility for state management and data processing, and shipping around a lot of unnecessary data – all of which can lead to performance issues and greater/unnecessary complexity in the UI.
Finding the right balance is the challenge; collaborative design, however, will always lead to services which effectively support the UI and deliver a more polished user experience.
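To make the trade-off concrete, here is a minimal sketch (all resource and field names are invented for illustration): the fine-grained call returns exactly what one screen shows today, while the coarse-grained call ships the whole resource and leaves the filtering to the UI.

```python
# Hypothetical order resource held by the service (all names invented).
ORDER = {
    "id": 42,
    "customer": {"name": "Ada", "email": "ada@example.com"},
    "lines": [{"sku": "A1", "qty": 2, "price": 9.99}],
    "audit": {"created_by": "system", "history": []},
}

def get_order_summary(order_id):
    """Fine-grained: shaped for one specific screen.
    A small UI tweak (say, the screen now also needs the customer's
    email) breaks this contract and forces a service change."""
    return {"id": ORDER["id"], "customer_name": ORDER["customer"]["name"]}

def get_order(order_id):
    """Coarse-grained: ships the whole resource, including audit data
    the UI never shows, leaving filtering and state management to the client."""
    return ORDER
```

Neither extreme is wrong in itself; the point is that the choice decides which team absorbs the cost of change.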
Establishing what it means for a web service to be in an acceptable state for consumption helps both development teams recognise when what they are doing isn't working and is wasting time on one or both sides. If the criteria aren't met, an alternative plan can be made, avoiding frustration in the development teams.
Some criteria can include:
- the web service interface is fully defined
- the web service fully supports the user interface
- the web service provides a full set of test/live data to support all system use cases and end to end processes
- the web services are available during development hours
- the web services are performant and meet a defined SLA
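The availability and performance criteria lend themselves to an automated gate. Below is a minimal sketch, assuming a 500 ms SLA threshold (the real figure belongs in the agreed criteria); `meets_sla` times a single call and treats an exception as a miss:

```python
import time

def meets_sla(call, max_seconds=0.5):
    """Time a single service call and report whether it met the SLA.
    Returns (ok, elapsed_seconds); a real check would sample many calls."""
    start = time.perf_counter()
    try:
        call()
    except Exception:
        return False, time.perf_counter() - start  # unavailable counts as a miss
    elapsed = time.perf_counter() - start
    return elapsed <= max_seconds, elapsed

# Stand-ins for real service calls:
fast = lambda: None
slow = lambda: time.sleep(0.6)

print(meets_sla(fast)[0])  # True
print(meets_sla(slow)[0])  # False
```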
Define - and use - Alternatives
Once the criteria have been defined, and if they aren't met, both development teams can explore alternatives that allow development to continue efficiently on both sides. If development continues despite the obstacles listed above, it creates drag in both teams as they get bogged down in finding bugs, raising and reporting bugs, collaborating about bugs, retesting fixes, and so on.
Some alternatives include:
- Pause development to allow all teams to catch up
- Ramp down web site developers to ease the ticket barrage
- Mock web services – either by the web development team or the web service development team. Ideally, the web service development team should do this.
Bear in mind that mocking web services involves writing more code which will need to be accounted for in either team.
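As a sketch of how small a mock can be, here is a canned-response HTTP server using only the Python standard library; the endpoint path and payload are invented for illustration:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned responses, keyed by path - ideally supplied by the web service
# team so the mock tracks the agreed interface definition.
CANNED = {"/api/orders/42": {"id": 42, "status": "shipped"}}

class MockServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED.get(self.path)
        self.send_response(200 if body else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body or {"error": "not found"}).encode())

    def log_message(self, *args):  # keep the dev console quiet
        pass

def start_mock(port=0):
    """Start the mock on a background thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), MockServiceHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    server = start_mock()
    print(f"Mock service on port {server.server_address[1]}")
```

The web site can then be pointed at the mock's base URL in its dev configuration, and swapped back to the real service once the acceptance criteria are met.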
Test, Test, Test
A prerequisite to quality assurance is knowing what to test. The best way to achieve this is in the project specification stage by creating a thorough set of use cases or scenarios. These can then be translated into integration tests, QA test scripts and also assist developers with their build.
Integration tests focus more on testing the service as a whole, checking not only the service behaviour but also any external dependencies referenced by the service itself. Integration tests can and should be run automatically by Continuous Integration and before new releases are made to the live environment. Again, writing web service integration tests involves writing more code, which will need to be accounted for in either team.
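As an illustration of how a use case translates into a test, here is a sketch using Python's `unittest`; the order-cancellation scenario and function names are invented, and a real integration test would call the deployed service and its dependencies rather than an in-process stand-in:

```python
import unittest

# In-process stand-in for the service under test; a real integration
# test would exercise the live endpoint and its external dependencies.
ORDERS = {1: {"status": "pending"}, 2: {"status": "shipped"}}

def cancel_order(order_id):
    """Toy service operation backing the use case under test."""
    order = ORDERS[order_id]
    if order["status"] == "shipped":
        raise ValueError("cannot cancel a shipped order")
    order["status"] = "cancelled"
    return order

class CancelOrderUseCase(unittest.TestCase):
    """Use case: a customer can cancel an order that has not yet shipped."""

    def test_pending_order_is_cancelled(self):
        self.assertEqual(cancel_order(1)["status"], "cancelled")

    def test_shipped_order_cannot_be_cancelled(self):
        with self.assertRaises(ValueError):
            cancel_order(2)
```

Run with `python -m unittest` as part of the CI pipeline; each use case from the specification becomes one such test class.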
The use cases can also be adopted into the QA process and either used for manual tests or even automated tests using frameworks such as Selenium to simulate user journeys conducted on the web site.
Developers can work with the web service team to create data sets which support developer integration and testing of the use cases.
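One lightweight way to organise this is a shared fixture keyed by use case, plus a guard that flags any use case left without supporting data (the use case names here are invented):

```python
# Shared fixture: each supported use case maps to the data needed to drive it.
USE_CASES = ["place_order", "cancel_order", "view_order_history"]

TEST_DATA = {
    "place_order": {"customer_id": 7, "sku": "A1", "qty": 2},
    "cancel_order": {"order_id": 42},
    "view_order_history": {"customer_id": 7},
}

def missing_data(use_cases=USE_CASES, data=TEST_DATA):
    """Return the use cases that still have no supporting test data."""
    return [uc for uc in use_cases if uc not in data]

print(missing_data())  # [] - every use case is covered
```

Running the guard in CI turns "the web service provides a full set of test data" from a hope into a checked criterion.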
Developers are often optimists who don't like to cause a big fuss. This mix of characteristics can lead to both dev teams hacking away through the forest undergrowth, oblivious to whether or not they are on course. This is why a constant aerial view of progress is essential. This vantage point is best held by a technical project manager, who would be responsible for one, if not all, of the following:
- speaking to/engaging with developers to keep tabs on the mood within the team and perceived levels of productivity
- keeping track of time spent dealing with web service issues (empirical evidence is key when assessing whether teams are being effective)
- re-estimating the project at key milestones along the way
- monitoring the progress and quality of the end product, from UI through to code
- keeping a lid on scope creep
A final note: in many cases the web application and the web services are developed by different teams in different companies - for example, a Digital Agency builds the web site and the client's internal dev team builds the web services. Keeping this co-dependency healthy matters not only from a technical perspective; it ultimately filters up to the business relationship between the supplier and the client. Leave it unattended and the accumulation of lots of small things/bugbears could end up souring the relationship - or plunging it into an infinite chasm, consumed by fire and fury.