Written by Mark Steel, CloudTalent CTO – published 22 November 2013
As IT has matured, many organisations – especially regulated ones – have developed robust processes for release management in order to gain control over development and safeguard production operations. We have worked with some large companies that term this the “route to live”: moving through the traditional stages from development and unit test, through integration and user acceptance, before moving into production (and DR, hopefully!).
Development teams require multiple environments, physical or virtual, to support their code development lifecycle. Often the collective size of these development environments exceeds that of the final production instance.
Each environment has a specific purpose, test objective and associated governance. That purpose and objective determine the dependencies and infrastructure requirements for each environment. For example, an environment set up for single-user functional testing does not need to scale to full production size, whereas a Capacity Test environment does need to reflect production capacity to some extent.
Environments require levels of isolation, both to manage the risk of test systems leaking data into production transactions and to prevent sensitive data or intellectual property being exposed in unrestricted development systems. For many types of business or data, these risks can be mission critical.
Developers are increasingly using automation as part of the development process, through Continuous Build and Integration supporting both Waterfall and Agile methodologies, resulting in industrialised, faster and more consistent delivery of releases. This allows code and supporting data to move through the environments rapidly.
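That automated movement through the stages can be pictured as a simple promotion pipeline. The sketch below is illustrative only – the stage names and gate checks are assumptions for this article, not the API of any real CI tool:

```python
# Minimal sketch of automated promotion along a "route to live".
# Stage names and gate checks are illustrative assumptions only.

ROUTE_TO_LIVE = ["dev", "system-test", "uat", "capacity-test", "pre-prod", "production"]

def promote(build, gates):
    """Advance a build through each stage while its gate check passes.

    `gates` maps stage name -> callable(build) -> bool.
    Returns the list of stages the build reached.
    """
    reached = []
    for stage in ROUTE_TO_LIVE:
        gate = gates.get(stage, lambda b: True)  # no gate defined: pass by default
        if not gate(build):
            break  # gate failed: the build stops short of this stage
        reached.append(stage)
    return reached

# Example: a build that passes everything up to UAT but fails capacity test.
gates = {"capacity-test": lambda b: b["load_ok"]}
print(promote({"id": "1.4.2", "load_ok": False}, gates))
# → ['dev', 'system-test', 'uat']
```

Each gate encodes the governance for that environment, so a release only reaches production once every earlier stage's test objective has been met.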
Pre-configured environments to support Development, Test, Systems Integration Test, User Acceptance Test, Capacity Test, Pre-production/production support and Production itself (including DR) are aligned to the application reference architecture, with access to them governed according to SDLC and Release Management policy. Environments are booked on a dedicated or shared basis according to the development plan, with secondary and tertiary environments established on demand where there are parallel streams of activity or a surge in capacity requirements.
All this sounds like an ideal partner for “cloud”. On-demand services that deliver the peaks and troughs of the development cycle are perfect for minimising costs and setup times. This is especially true for load testing: if you need to spin up 1,000 instances, you don’t have to pay up-front and invest capex in equipment – you only pay for what you need, when you need it. The steady-state ‘base load’, by contrast, is often more effectively supported within dedicated or ring-fenced capacity.
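The economics are easy to see with a back-of-the-envelope model. All figures below are hypothetical assumptions for illustration, not real pricing:

```python
# Rough comparison of on-demand burst capacity versus dedicated kit
# for a short-lived load test. All rates and prices are hypothetical.

def on_demand_cost(instances, hours, rate_per_hour):
    """Pay only for what you use, when you use it."""
    return instances * hours * rate_per_hour

def dedicated_cost(instances, capex_per_instance):
    """Up-front capital expenditure, regardless of how long the test runs."""
    return instances * capex_per_instance

# 1,000 instances for a 4-hour load test at a notional 0.10/hour,
# versus buying 1,000 servers at a notional 500 each.
burst = on_demand_cost(1000, 4, 0.10)
capex = dedicated_cost(1000, 500.0)
print(burst, capex)  # the burst is a tiny fraction of the capex
```

The same arithmetic flips for the steady-state base load: a small number of instances running all year quickly overtakes the cost of dedicated capacity, which is why the hybrid split described above makes sense.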
Cloud, however, is only part of a complex end-to-end process for supporting development effectively, and should not be looked at in isolation or without understanding the wider landscape.
And in spite of the benefits of leveraging cloud, there are challenges to this hybrid approach. If you intend to run “production” back on-premise for security, data protection or regulatory reasons, then at some point on the “route to live” you need to bring the software back in – probably at the system test stage. You then have a split environment, which adds complexity. For example: how do you take internal test data to a cloud platform (it may well need to be anonymised)? Where do you put your code repositories – probably on-premise, as they hold your intellectual property? And how do you co-ordinate your platform builds and standards between a “cloud” environment that may be vanilla and an on-premise build that may have been customised?
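The anonymisation step is worth sketching, as it is often the gate that internal data must pass before leaving the on-premise estate. The field names and the choice of salted SHA-256 pseudonymisation below are illustrative assumptions, not a prescribed approach:

```python
# Minimal sketch of anonymising internal test data before it moves to a
# cloud test environment. Field names and the hashing approach are
# illustrative assumptions only.
import hashlib

SENSITIVE_FIELDS = {"name", "email", "account_number"}

def pseudonymise(record, salt="per-release-salt"):
    """Replace sensitive values with short salted one-way hashes, keeping
    the record's shape so downstream tests still see realistic data."""
    out = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            out[key] = digest[:12]  # stable token: same input, same output
        else:
            out[key] = value  # non-sensitive fields pass through unchanged
    return out

customer = {"name": "A N Other", "email": "a@example.com", "balance": 120.50}
print(pseudonymise(customer))
```

Keeping the tokens stable across runs (via the salt) matters: referential integrity between records survives anonymisation, so integration tests in the cloud environment still behave like tests against real data.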
Governance and Planning is Key
It is essential to work with the Development, Enterprise Architecture, Release Management, Security and Compliance functions to design the ‘Route to Live’ solution so that it is simple, scalable, cost-effective and satisfies the requirements of all stakeholders.