Quantitecture

Performance, Scalability and Availability in IT

Performance and Scalability Survey results

Posted by J on October 23, 2008

The results of the survey are available at the Quantitecture Web Site.

Comments on this blog are closed but you may contact us through the company web site.

Thank you.


2 Responses to “Performance and Scalability Survey results”

  1. J Singh said

    A reader comments:
    “I was astounded that the requirements were discovered so late in the project time-line. From this small study it is blatantly apparent that BAs and PMs need to work on their data gathering and communication skills. The irony is that these are readily learned and applied, yet nobody seems capable of engaging them.”

    This is consistent with my own experience, so I wasn’t as surprised. Most BAs and PMs are trained in spelling out functional requirements: what the application must do, how it must interact with the user, and so on. The training somehow falls short when it comes to non-functional requirements such as how fast the application must respond and how many users it must support. This may be because they come from the ranks of previous users; they instinctively know, for example, which fields belong together on a page. When it comes to performance and scalability, all they can say is “it must be very fast”, and that, as you know, is not a well-formulated requirement; a sketch of a quantified alternative appears below.

    One of our more successful cases was one where the BAs and PMs specified what the application must do and the Finance and Operations people specified the scalability requirements. That doesn’t always happen.
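
    As a minimal sketch, here is what a quantified, testable performance requirement might look like; the operation, names, and figures are hypothetical and chosen only for illustration, not drawn from the survey:

    ```python
    # Hypothetical example: contrast "it must be very fast" with a requirement
    # that can actually be measured and tested. Names and figures are illustrative.
    from dataclasses import dataclass

    @dataclass
    class PerformanceRequirement:
        operation: str            # e.g. "search orders"
        concurrent_users: int     # load the system must sustain
        percentile: float         # latency percentile the target applies to (0..1)
        max_latency_ms: float     # target latency for that percentile

    def is_met(req: PerformanceRequirement, latencies_ms: list) -> bool:
        """Check latencies measured under the specified load against the target."""
        if not latencies_ms:
            return False
        ordered = sorted(latencies_ms)
        idx = min(len(ordered) - 1, int(req.percentile * len(ordered)))
        return ordered[idx] <= req.max_latency_ms

    # "95% of order searches complete within 2 seconds at 500 concurrent users."
    req = PerformanceRequirement("search orders", 500, 0.95, 2000.0)
    print(is_met(req, [850.0, 1200.0, 1900.0, 2600.0]))  # False: the slow tail misses the target
    ```

    Expressed this way, the requirement can drive load tests and capacity estimates instead of remaining a matter of opinion.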

  2. J Singh said

    A reader comments:

    Our experience has been that the vast majority of our application projects are ‘oversized’ with respect to infrastructure capacity. In part this is because of the risk of ‘undersizing’ (and the associated performance-related service delivery failures), and in part because application owners prefer ‘consensus of expert opinion’ over predictive analytics as the planning methodology of choice. In addition, capacity planning is often deemed just another trivial ‘guesswork’ task to delegate to someone who seems sufficiently ‘technical’. Usually a vendor rep. Then there is no follow-up to see how many hundreds of percent wrong the forecasts were.

    As a consequence, we are rapidly ‘virtualizing’ many of our not-very-busy servers. This effectively buries the scalability problems behind a cloud of multivariate performance complexities wherein no specific project is singled out for an episode of care, or performance triage.

    CIOs’ adoration of application functionality often blinds them to the equally essential importance of the infrastructure. This is like spending a lot of money on fancy socks but not on shoes.

    The cumulative effects of ‘oversizing’ may offset the value of risk mitigation in such a way that “the view isn’t worth the climb.” A string of ‘successful’ projects that are inefficient, oversized, and hardware-unaware is often unable to exploit efficiencies in the operating environment. This makes for an ocean of operational misery and service delivery failures. In our case, this is a much bigger problem than mitigating the risk of ‘undersizing’ the initial project.

    In the few cases where the computing landscape is prone to capacity issues and scalability limitations, it has been our experience that the initial design was flawed and infrastructure considerations were ignored. Certainly this would be an opportunity for risk mitigation up front instead of postmortem.

    Tim Browning
    Coca-Cola Enterprises
    Technical Architecture
    Enterprise Performance and Capacity Management

    Coca-Cola Enterprises runs one of the world’s ‘largest’ commercial IT environments (per IBM Corporation).


 