Pattern and anti-pattern CI/CD


1.      Pattern and anti-pattern CI/CD

The purpose of the deployment pipeline is threefold:

  • Visibility: All aspects of the delivery system – building, deploying, testing, and releasing – are visible to all team members, promoting collaboration.
  • Feedback: Team members learn of problems as soon as they occur so that issues are fixed as soon as possible.
  • Continually Deploy: Through a fully automated process, you can deploy and release any version of the software to any environment.

In the Deployment Pipeline diagram above, all of the patterns are shown in context. Some patterns span multiple stages of the pipeline, so I chose the stage where each is used most prominently.

1.1       Configuration management – pattern and anti-pattern

1.1.1      Configurable Third-Party Software

  • Pattern Evaluate and use third-party software that can be easily configured, deployed, and automated.
  • Anti-patterns Procuring software that cannot be externally configured. Software without an API or command-line interface that forces teams to use the GUI only.

1.1.2      Configuration Catalog

  • Pattern Maintain a catalog of all options for each application, how to change these options and storage locations for each application. Automatically create this catalog as part of the build process.
  • Anti-patterns Configuration options are not documented. The catalog of applications and other assets is “tribal knowledge”.

1.1.3      Mainline

  • Pattern Minimize merging and keep the number of active code lines manageable by developing on a mainline.
  • Anti-patterns Multiple branches per project.

1.1.4      Merge Daily

  • Pattern Changes committed to the mainline are applied to each branch on at least a daily basis.
  • Anti-patterns Merging only once per iteration or once a week – that is, less often than once a day.

1.1.5      Protected Configuration

  • Pattern Store configuration information in secure remotely accessible locations such as a database, directory, or registry.
  • Anti-patterns Plain-text passwords and/or configuration stored on a single machine or share.

1.1.6      Repository

  • Pattern All source files – executable code, configuration, host environment, and data – are committed to a version-control repository.
  • Anti-patterns Some files are checked in, others, such as environment configuration or data changes, are not. Binaries – that can be recreated through the build and deployment process – are checked in.

1.1.7      Short-Lived Branches

  • Pattern Branches must be short lived – ideally less than a few days and never more than an iteration.
  • Anti-patterns Branches that last more than an iteration. Branches by product feature that live past a release.

1.1.8      Single Command Environment

  • Pattern Check out the project’s version-control repository and run a single command to build and deploy the application to any accessible environment, including the local development environment.
  • Anti-patterns Forcing the developer to define and configure environment variables. Making the developer install numerous tools in order for the build/deployment to work.

1.1.9      Single Path to Production

  • Pattern Configuration management of the entire system – source, configuration, environment and data. Any change can be tied back to a single revision in the version-control system.
  • Anti-patterns Parts of system are not versioned. Inability to get back to a previously configured software system.

1.2       CI continuous integration- pattern and anti-pattern

1.2.1      Build Threshold

  • Pattern Fail a build when a project rule is violated – such as architectural breaches, slow tests, and coding standard violations.
  • Anti-patterns Manual code reviews. Learning of code quality issues later in the development cycle.

1.2.2      Commit Often

  • Pattern Each team member checks in regularly to trunk – at least once a day but preferably after each task to trigger the CI system.
  • Anti-patterns Source files are committed less than once a day because developers accumulate too many changes between commits.

1.2.3      Continuous Feedback

  • Pattern Send automated feedback from CI system to all Cross-Functional Team members.
  • Anti-patterns Notifications are not sent; notifications are ignored; CI system spams everyone with information they cannot use.

1.2.4      Continuous Integration

  • Pattern Building and testing software with every change committed to a project’s version control repository.
  • Anti-patterns Scheduled builds, nightly builds, building periodically, building exclusively on developer’s machines, not building at all.

1.2.5      Stop the Line

  • Pattern Fix software delivery errors as soon as they occur; stop the line. No one checks in on a broken build as the fix becomes the highest priority.
  • Anti-patterns Builds stay broken for long periods of time, thus preventing developers from checking out functioning code.

1.2.6      Independent Build

  • Pattern Write build scripts that are decoupled from IDEs. These build scripts are executed by a CI system so that software is built at every change.
  • Anti-patterns Automated build relies on IDE settings. Builds are unable to be run from the command line.

1.2.7      Visible Dashboards

  • Pattern Provide large visible displays that aggregate information from your delivery system to provide high-quality feedback to the Cross-Functional Team in real time.
  • Anti-patterns Email-only alerts or not publicizing the feedback to the entire team.

1.3       Testing- pattern and anti-pattern

1.3.1      Automate Tests

  • Pattern Automate the verification and validation of software to include unit, component, capacity, functional, and deployment tests.
  • Anti-patterns Manual testing of units, components, deployment, and other types of tests.
  • Unit – Automating tests without any dependencies.
  • Component – Automating tests with dependencies on other components and heavyweight dependencies such as the database or file system.
  • Deployment – Automating tests to verify the deployment and configuration were successful. Sometimes referred to as a “smoke test”.
  • Functional – Automating tests to verify the behavior of the software from a user’s perspective.
  • Capacity – Automating load and performance testing in near-production conditions.

1.3.2      Isolate Test Data

  • Pattern Use transactions for database-dependent tests (e.g., component tests) and roll back the transaction when done. Use a small subset of data to effectively test behavior (see the sketch below).
  • Anti-patterns Using a copy of production data for Commit Stage tests. Running tests against a shared database.
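
A minimal sketch of the pattern referenced above, assuming pytest and an in-memory SQLite database with a hypothetical accounts table; a real component test would point the same fixture at the application’s own schema.

```python
# Isolate Test Data sketch: the test writes inside a transaction that the
# fixture rolls back, so no state leaks between tests.
# The accounts table and balances are illustrative assumptions.
import sqlite3
import pytest

@pytest.fixture
def db():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
    conn.commit()
    yield conn       # DML executed by the test opens an implicit transaction
    conn.rollback()  # undo everything the test wrote, keeping the sandbox clean
    conn.close()

def test_deposit_updates_balance(db):
    db.execute("INSERT INTO accounts (id, balance) VALUES (1, 100)")
    db.execute("UPDATE accounts SET balance = balance + 50 WHERE id = 1")
    balance = db.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
    assert balance == 150
```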

1.3.3      Parallel Tests

  • Pattern Run multiple tests in parallel across hardware instances to decrease the time it takes to run the test suite.
  • Anti-patterns Running tests on one machine or instance. Running dependent tests that cannot be run in parallel.

1.3.4      Stub Systems

  • Pattern Use stubs to simulate external systems to reduce deployment complexity (a sketch follows below).
  • Anti-patterns Manually installing and configuring interdependent systems for Commit Stage build and deployment.
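
As a rough sketch of the pattern, the snippet below spins up a throwaway in-process HTTP stub that stands in for an external system during Commit Stage tests; the endpoint path and response shape are invented for illustration.

```python
# Stub Systems sketch: a disposable HTTP stub replaces an external system,
# so the Commit Stage needs no real deployment of that dependency.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from threading import Thread

class ExternalSystemStub(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/price/42":
            payload = json.dumps({"item": 42, "price": 9.99}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(payload)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep test output quiet

def start_stub(port=8099):
    server = HTTPServer(("127.0.0.1", port), ExternalSystemStub)
    Thread(target=server.serve_forever, daemon=True).start()
    return server  # tests call server.shutdown() in their teardown
```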

1.3.5      End-To-End Testing Considered Harmful

Continuous Delivery is a set of holistic principles and practices to reduce time to market, and it is predicated upon rapid and reliable test feedback. Continuous Delivery mandates any change to code, configuration, data, or infrastructure must pass a series of automated and exploratory tests in a Deployment Pipeline to evaluate production readiness, so test execution times must be low and test results must be deterministic if an organisation is to achieve shorter lead times.

For example, consider a Company Accounts service in which year end payments are submitted to a downstream Payments service.

The behaviour of the Company Accounts service could be checked at build time by the following types of automated test:

  • Unit tests check intent against implementation by verifying a discrete unit of code
  • Acceptance tests check implementation against requirements by verifying a functional slice of the system
  • End-to-end tests check implementation against requirements by verifying a functional slice of the system, including unowned dependent services

While unit tests and acceptance tests vary in terms of purpose and scope, acceptance tests and end-to-end tests vary solely in scope. Acceptance tests exclude unowned dependent services, so an acceptance test of a Company Accounts user journey would use a System Under Test comprised of the latest Company Accounts code and a Payments Stub.

End-to-end tests include unowned dependent services, so an end-to-end test of a Company Accounts user journey would use a System Under Test comprised of the latest Company Accounts code and a running version of Payments.
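
To make the difference in scope concrete, here is a minimal sketch of the acceptance test described above: the System Under Test is a simplified slice of Company Accounts wired to a Payments Stub, so no unowned service takes part. Class and method names are assumptions made for illustration.

```python
# Acceptance-test sketch: Company Accounts code plus a Payments Stub form the
# entire System Under Test; the real Payments service is never involved.
class PaymentsStub:
    """Stands in for the downstream Payments service."""
    def __init__(self):
        self.submitted = []

    def pay(self, company_id, amount):
        self.submitted.append((company_id, amount))
        return {"code": "OK-0001", "days": 3}

class CompanyAccounts:
    """Simplified functional slice of the Company Accounts service."""
    def __init__(self, payments):
        self.payments = payments

    def submit_year_end_payment(self, company_id, amount):
        response = self.payments.pay(company_id, amount)
        return response["code"]  # the user journey only needs the confirmation code

def test_year_end_payment_is_confirmed():
    payments = PaymentsStub()
    accounts = CompanyAccounts(payments)
    assert accounts.submit_year_end_payment("ACME-42", 1000) == "OK-0001"
    assert payments.submitted == [("ACME-42", 1000)]
```

An end-to-end test of the same journey would swap PaymentsStub for a running Payments service, which is precisely where the execution time and non-determinism discussed below come from.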

If a testing strategy is to be compatible with Continuous Delivery it must have an appropriate ratio of unit tests, acceptance tests, and end-to-end tests that balances the need for information discovery against the need for fast, deterministic feedback. If testing does not yield new information then defects will go undetected, but if testing takes too long delivery will be slow and opportunity costs will be incurred.

1.3.5.1     The folly of End-To-End Testing

“Any advantage you gain by talking to the real system is overwhelmed by the need to stamp out non-determinism” – Martin Fowler

End-To-End Testing is a testing practice in which a large number of automated end-to-end tests and manual regression tests are used at build time with a small number of automated unit and acceptance tests. The End-To-End Testing test ratio can be visualised as a Test Ice Cream Cone.

End-To-End Testing often seems attractive due to the perceived benefits of an end-to-end test:

  1. An end-to-end test maximises its System Under Test, suggesting a high degree of test coverage
  2. An end-to-end test uses the system itself as a test client, suggesting a low investment in test infrastructure

Given the above it is perhaps understandable why so many organisations adopt End-To-End Testing – as observed by Don Reinertsen, “this combination of low investment and high validity creates the illusion that system tests are more economical”. However, the End-To-End Testing value proposition is fatally flawed as both assumptions are incorrect:

  1. The idea that testing a whole system will simultaneously test its constituent parts is a Decomposition Fallacy. Checking implementation against requirements is not the same as checking intent against implementation, which means an end-to-end test will check the interactions between code pathways but not the behaviours within those pathways
  2. The idea that testing a whole system will be cheaper than testing its constituent parts is a Cheap Investment Fallacy. Test execution time and non-determinism are directly proportional to System Under Test scope, which means an end-to-end test will be slow and prone to non-determinism

Martin Fowler has warned before that “non-deterministic tests can completely destroy the value of an automated regression suite”, and Stephen Covey’s Circles of Control, Influence, and Concern highlights how the multiple actors in an end-to-end test make non-determinism difficult to identify and resolve. If different teams in the same Companies R Us organisation owned the Company Accounts and Payments services the Company Accounts team would control its own service in an end-to-end test, but would only be able to influence the second-party Payments service.

The lead time to improve an end-to-end test depends on where the change is located in the System Under Test, so the Company Accounts team could analyse and implement a change in the Company Accounts service in a relatively short lead time. However, the lead time for a change to the Payments service would be constrained by the extent to which the Company Accounts team could persuade the Payments team to take action.

Alternatively, if a separate Payments R Us organisation owned the Payments service it would be a third-party service and merely a concern of the Company Accounts team.

In this situation a change to the Payments service would take much longer as the Company Accounts team would have zero control or influence over Payments R Us. Furthermore, the Payments service could be arbitrarily updated with little or no warning, which would increase non-determinism in Company Accounts end-to-end tests and make it impossible to establish a predictable test baseline.

A reliance upon End-To-End Testing is often a symptom of long-term underinvestment producing a fragile system that is resistant to change, has long lead times, and is optimised for Mean Time Between Failures instead of Mean Time To Repair. Customer experience and operational performance cannot be accurately predicted in a fragile system due to variations caused by external circumstances, and focussing on failure probability instead of failure cost creates an exposure to extremely low probability, extremely high cost events known as Black Swans, such as Knight Capital losing $440 million in 45 minutes. For example, if the Payments data centre suffered a catastrophic outage then all customer payments made by the Company Accounts service would fail.

An unavailable Payments service would leave customers of the Company Accounts service with their money locked up in in-flight payments, and a slow restoration of service would encourage dissatisfied customers to take their business elsewhere. If any in-flight payments were lost and it became public knowledge it could trigger an enormous loss of customer confidence.

End-To-End Testing is an uncomprehensive, high cost testing strategy. An end-to-end test will not check behaviours, will take time to execute, and will intermittently fail, so a test suite largely composed of end-to-end tests will result in poor test coverage, slow execution times, and non-deterministic results. Defects will go undetected, feedback will be slow and unreliable, maintenance costs will escalate, and as a result testers will be forced to rely on their own manual end-to-end regression tests. End-To-End Testing cannot produce short lead times, and it is utterly incompatible with Continuous Delivery.

1.3.5.2     The values of Continuous Testing

“Cease dependence on inspection to achieve quality. Eliminate the need for inspection on a mass basis by building quality into the product in the first place” – Dr. W. Edwards Deming

Continuous Delivery advocates Continuous Testing – a testing strategy in which a large number of automated unit and acceptance tests are complemented by a small number of automated end-to-end tests and focussed exploratory testing. The Continuous Testing test ratio can be visualised as a Test Pyramid, which might be considered the antithesis of the Test Ice Cream Cone.

Continuous Testing is aligned with Test-Driven Development and Acceptance Test Driven Development, and by advocating cross-functional testing as part of a shared commitment to quality it embodies the Continuous Delivery principle of Build Quality In. However, Continuous Testing can seem daunting due to the perceived drawbacks of unit tests and acceptance tests:

  1. A unit test or acceptance test minimises its System Under Test, suggesting a low degree of test coverage
  2. A unit test or acceptance test uses its own test client, suggesting a high investment in test infrastructure

While the End-To-End Testing value proposition is invalidated by incorrect assumptions of high test coverage and low maintenance costs, the inverse is true of Continuous Testing – its perceived drawbacks rest on assumptions of low test coverage and high maintenance costs that are equally incorrect:

  1. A unit test will check intent against implementation and an acceptance test will check implementation against requirements, which means both the behaviour of a code pathway and its interactions with other pathways can be checked
  2. A unit test will restrict its System Under Test scope to a single pathway and an acceptance test will restrict itself to a single service, which means both can have the shortest possible execution time and deterministic results

A non-deterministic acceptance test can be resolved in a much shorter period of time than an end-to-end test as the System Under Test has a single owner. If Companies R Us owned the Company Accounts service and Payments R Us owned the Payments service a Company Accounts acceptance test would only use services controlled by the Company Accounts team.

If the Company Accounts team attempted to identify and resolve non-determinism in an acceptance test they would be able to make the necessary changes in a short period of time. There would also be no danger of unexpected changes to the Payments service impeding an acceptance test of the latest Company Accounts code, which would allow a predictable test baseline to be established.

End-to-end tests are a part of Continuous Testing, not least because the idea that testing the constituent parts of a system will simultaneously test the whole system is a Composition Fallacy. A small number of automated end-to-end tests should be used to validate core user journeys, but not at build time when unowned dependent services are unreliable and unrepresentative. The end-to-end tests should be used for release time smoke testing and runtime production monitoring, with synthetic transactions used to simulate user activity. This approach will increase confidence in production releases and should be combined with real-time monitoring of business and operational metrics to accelerate feedback loops and understand user behaviours.

In Continuous Delivery there is a recognition that optimising for Mean Time To Repair is more valuable than optimising for Mean Time Between Failures as it enables an organisation to minimise the impact of production defects, and it is more easily achievable. Defect cost can be controlled as Little’s Law guarantees smaller production releases will shorten lead times to defect resolution, and Continuous Testing provides the necessary infrastructure to shrink feedback loops for smaller releases. The combination of Continuous Testing and Continuous Delivery practices such as Blue Green Releases and Canary Releases empowers an organisation to create a robust system capable of neutralising unanticipated events, and advanced practices such as Dark Launching and Chaos Engineering can lead to antifragile systems that seek to benefit from Black Swans. For example, if Chaos Engineering surfaced concerns about the Payments service the Company Accounts team might Dark Launch its Payments Stub into production and use it in the unlikely event of a Payments data centre outage.

While the Payments data centre was offline the Company Accounts service would gracefully degrade to collecting customer payments in the Payments Stub until the Payments service was operational again. Customers would be unaffected by the production incident, and if competitors to the Company Accounts service were also dependent on the same third-party Payments service that would constitute a strategic advantage in the marketplace. Redundant operational capabilities might seem wasteful, but Continuous Testing promotes operational excellence and as Nassim Nicholas Taleb has remarked “something unusual happens – usually”.

Continuous Testing can be a comprehensive and low cost testing strategy. According to Dave Farley and Jez Humble “building quality in means writing automated tests at multiple levels”, and a test suite largely comprised of unit and acceptance tests will contain meticulously tested scenarios with a high degree of test coverage, low execution times, and predictable test results. This means end-to-end tests can be reserved for smoke testing and production monitoring, and testers can be freed up from manual regression testing for higher value activities such as exploratory testing. This will result in fewer production defects, fast and reliable feedback, shorter lead times to market, and opportunities for revenue growth.

1.3.5.3     From End-To-End Testing to Continuous Testing

“Push tests as low as they can go for the highest return in investment and quickest feedback” – Janet Gregory and Lisa Crispin

Moving from End-To-End Testing to Continuous Testing is a long-term investment, and should be based on the notion that an end-to-end test can be pushed down the Test Pyramid by decoupling its concerns as follows:

  • Connectivity – can services connect to one another
  • Conversation – can services talk with one another
  • Conduct – can services behave with one another

Assume the Company Accounts service depends on a Pay endpoint on the Payments service, which accepts a company id and payment amount before returning a confirmation code and days until payment. The Company Accounts service sends the id and amount request fields and silently depends on the code response field.

The connection between the services could be unit tested using Test Doubles, which would allow the Company Accounts service to test its reaction to different Payments behaviours. Company Accounts unit tests would replace the Payments connector with a Mock or Stub connector to ensure scenarios such as an unexpected Pay timeout were appropriately handled.
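
A minimal sketch of that unit test, assuming a hypothetical PaymentsConnector with a pay() method and using unittest.mock as the Test Double; the retry behaviour shown is an invented example of appropriate handling.

```python
# Unit-test sketch: a Mock connector simulates an unexpected Pay timeout so
# the Company Accounts error handling is checked without a running Payments
# service. Class names and the retry behaviour are illustrative assumptions.
from unittest.mock import Mock

class PaymentTimeout(Exception):
    pass

class CompanyAccountsService:
    def __init__(self, payments_connector):
        self.payments = payments_connector

    def submit_payment(self, company_id, amount):
        try:
            response = self.payments.pay(company_id, amount)
            return {"status": "confirmed", "code": response["code"]}
        except PaymentTimeout:
            return {"status": "queued_for_retry"}  # degrade gracefully instead of failing

def test_pay_timeout_is_handled_gracefully():
    connector = Mock()
    connector.pay.side_effect = PaymentTimeout()
    service = CompanyAccountsService(connector)
    assert service.submit_payment("ACME-42", 1000) == {"status": "queued_for_retry"}
```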

The conversation between the services could be unit tested using Consumer Driven Contracts, which would enable the Company Accounts service to have its interactions continually verified by the Payments service. The Payments service would issue a Provider Contract describing its Pay API at build time, the Company Accounts service would return a Consumer Contract describing its usage, and the Payments service would create a Consumer Driven Contract to be checked during every build.

With the Company Accounts service not using the days response field it would be excluded from the Consumer Contract and Consumer Driven Contract, so a build of the Payments service that removed days or added a new comments response field would be successful. If the code response field was removed the Consumer Driven Contract would fail, and the Payments team would have to collaborate with the Company Accounts team on a different approach.
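
A deliberately simplified sketch of that Consumer Driven Contract check, with each contract reduced to the set of Pay response fields a side cares about; a real implementation would typically use a contract-testing tool such as Pact, and the field names follow the example above.

```python
# Contracts reduced to sets of response fields for the Pay endpoint.
PROVIDER_CONTRACT = {"code", "days"}   # what the Payments service offers
CONSUMER_CONTRACT = {"code"}           # what Company Accounts actually uses
# The Consumer Driven Contract is what the provider promises not to break.
CONSUMER_DRIVEN_CONTRACT = CONSUMER_CONTRACT & PROVIDER_CONTRACT

def verify_payments_build(new_response_fields):
    """Runs in every Payments build: fail if a consumed field disappears."""
    missing = CONSUMER_DRIVEN_CONTRACT - set(new_response_fields)
    if missing:
        raise AssertionError(f"Breaking change for Company Accounts: {missing} removed")

verify_payments_build({"code", "comments"})  # passes: 'days' removed, 'comments' added
# verify_payments_build({"days"})            # would fail: the consumed 'code' field was removed
```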

The conduct of the services could be unit tested using API Examples, which would permit the Company Accounts service to check for behavioural changes in new releases of the Payments service. Each release of the Payments service would be accompanied by a sibling artifact containing example API requests and responses for the Pay endpoint, which would be plugged into Company Accounts unit tests to act as representative test data and warn of behavioural changes.

If a new version of the Payments service changed the format of the code response field from alphanumeric to numeric it would cause the Company Accounts service to fail at build time, indicating a behavioural change within the Payments service and prompting a conversation between the teams.
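
As a sketch of how such an API Examples check might look, the unit test below loads a hypothetical payments-api-examples.json sibling artifact and asserts the one behaviour Company Accounts relies on, an alphanumeric confirmation code; the file name and format are assumptions.

```python
# Unit-test sketch: representative Pay examples shipped with each Payments
# release are asserted against the behaviour Company Accounts depends on.
import json
import re

def load_pay_examples(path="payments-api-examples.json"):
    with open(path) as f:
        return json.load(f)["pay"]

def test_confirmation_code_is_alphanumeric():
    for example in load_pay_examples():
        code = example["response"]["code"]
        # A Payments release that switches the code format from alphanumeric
        # to numeric-only would fail this assertion, fail the Company Accounts
        # build, and prompt a conversation between the two teams.
        assert re.fullmatch(r"[A-Z]+-[0-9]+", code), f"unexpected code format: {code}"
```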

1.3.5.4     Conclusion

“Not only won’t system testing catch all the bugs, but it will take longer and cost more – more than you save by skipping effective acceptance testing” – Jerry Weinberg

End-To-End Testing seems attractive to organisations due to its promise of high test coverage and low maintenance costs, but the extensive use of automated end-to-end tests and manual regression tests can only produce a fragile system with slow, unreliable test feedback that inflates lead times and is incompatible with Continuous Delivery. Continuous Testing requires an upfront and ongoing investment in test automation, but a comprehensive suite of automated unit tests and acceptance tests will ensure fast, deterministic test feedback that reduces production defects, shortens lead times, and encourages the Continuous Delivery of robust or antifragile systems.

http://www.alwaysagileconsulting.com/articles/end-to-end-testing-considered-harmful/

1.4       Deployment pipeline- pattern and anti-pattern

1.4.1      Deployment Pipeline

  • Pattern A deployment pipeline is an automated implementation of your application’s build, deploy, test, and release process.
  • Anti-patterns Deployments require human intervention (other than approval or clicking a button). Deployments are not production ready.

1.4.2      Value-Stream Map

  • Pattern Create a map illustrating the process from check in to the version-control system to the software release to identify process bottlenecks.
  • Anti-patterns Separately defined processes and views of the check-in to release process.

1.5       Build and deployment scripting- pattern and anti-pattern

1.5.1      Fail Fast

  • Pattern Fail the build as soon as possible. Design scripts so that processes that commonly fail run first. These processes should be run as part of the Commit Stage.
  • Anti-patterns Common build mistakes are not uncovered until late in the deployment process.

1.5.2      Fast Builds

  • Pattern The Commit Build provides feedback on common build problems as quickly as possible – usually in under 10 minutes.
  • Anti-patterns Throwing everything into the commit stage process, such as running every type of automated static analysis tool or running load tests such that feedback is delayed.

1.5.3      Scripted Deployment

  • Pattern All deployment processes are written in a script, checked in to the version-control system, and run as part of the Single Delivery System.
  • Anti-patterns Deployment documentation is used instead of automation. Manual deployments or partially manual deployments. Using a GUI to perform a deployment.

1.5.4      Unified Deployment

  • Pattern The same deployment script is used for each deployment. The Protected Configuration – per environment – is variable but managed (see the sketch below).
  • Anti-patterns Different deployment script for each target environment or even for a specific machine. Manual configuration after deployment for each target environment.
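
A minimal sketch of a unified deployment script, assuming PyYAML, per-environment configuration files under config/, and ssh/scp access to the target hosts; the hosts, paths, and artifact naming are invented for illustration.

```python
# One deployment script for every target environment; only the managed,
# per-environment configuration varies.
import subprocess
import sys
import yaml  # assumes PyYAML is installed

def deploy(environment, version):
    with open(f"config/{environment}.yml") as f:
        config = yaml.safe_load(f)  # the Protected Configuration for this environment
    artifact = f"builds/app-{version}.tar.gz"
    for host in config["hosts"]:
        target = f"{config['user']}@{host}"
        subprocess.run(["scp", artifact, f"{target}:/opt/app/"], check=True)
        subprocess.run(["ssh", target, f"/opt/app/install.sh {version}"], check=True)

if __name__ == "__main__":
    deploy(environment=sys.argv[1], version=sys.argv[2])  # e.g. python deploy.py staging 1.4.2
```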

1.6       Deploying and releasing applications- pattern and anti-pattern

1.6.1      Binary Integrity

  • Pattern Build your binaries once, while deploying the binaries to multiple target environments, as necessary.
  • Anti-patterns Software is built in every stage of the deployment pipeline.

1.6.2      Canary Release

  • Pattern Release software to production for a small subset of users (e.g., 10%) to get feedback prior to a complete rollout.
  • Anti-patterns Software is released to all users at once.

1.6.3      Blue-Green Deployments

  • Pattern Deploy software to a non-production environment (call it blue) while production continues to run. Once it’s deployed and “warmed up”, switch production (green) to non-production and blue to green simultaneously.
  • Anti-patterns Production is taken down while the new release is applied to production instance(s).

1.6.4      Dark Launching

  • Pattern Launch a new application or new features when they affect the fewest users.
  • Anti-patterns Software is deployed regardless of number of active users.

1.6.5      Rollback Release

  • Pattern Provide an automated single command rollback of changes after an unsuccessful deployment.
  • Anti-patterns Manually undoing changes applied in a recent deployment. Shutting down production instances while changes are undone.

1.6.6      Self-Service Deployment

  • Pattern Any Cross-Functional Team member selects the version and environment to deploy the latest working software.
  • Anti-patterns Deployments are released to the team at specified intervals by the “Build Team”. Testing can only be performed in a shared state without isolation from others.

1.7       Infrastructure and environments- pattern and anti-pattern

1.7.1      Automate Provisioning

  • Pattern Automate the process of configuring your environment to include networks, external services, and infrastructure.
  • Anti-patterns Configured instances are “works of art” requiring team members to perform partially or fully manual steps to provision them.

1.7.2      Behavior-Driven Monitoring

  • Pattern Automate tests to verify the behavior of the infrastructure. Continually run these tests to provide near real-time alerting (sketched below).
  • Anti-patterns No real-time alerting or monitoring. System configuration is written without tests.
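
A rough sketch of the idea: the same behavioural checks that verify the infrastructure at deploy time are run continuously against production and raise an alert on failure. The URLs and the alert hook are placeholders.

```python
# Behavioural checks run on a schedule to provide near real-time alerting.
# Endpoints and the alerting mechanism are illustrative assumptions.
import time
import urllib.request

CHECKS = {
    "health endpoint responds": "https://example.internal/healthz",
    "login page is served": "https://example.internal/login",
}

def passes(url, timeout=5):
    with urllib.request.urlopen(url, timeout=timeout) as response:
        return response.status == 200

def alert(name):
    print(f"ALERT: behaviour check failed: {name}")  # swap in a pager or chat integration

def monitor(interval_seconds=60):
    while True:
        for name, url in CHECKS.items():
            try:
                if not passes(url):
                    alert(name)
            except OSError:
                alert(name)
        time.sleep(interval_seconds)
```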

1.7.3      Immune System

  • Pattern Deploy software one instance at a time while conducting Behavior-Driven Monitoring. If an error is detected during the incremental deployment, a Rollback Release is initiated to revert changes.
  • Anti-patterns Non-incremental deployments without monitoring.

1.7.4      Lockdown Environments

  • Pattern Lock down shared environments from unauthorized external and internal usage, including operations staff. All changes are versioned and applied through automation.
  • Anti-patterns The “Wild West”: any authorized user can access shared environments and apply manual configuration changes, putting the environment in an unknown state leading to deployment errors.

1.7.5      Production-Like Environments

  • Pattern Target environments are as similar to production as possible.
  • Anti-patterns Environments are “production like” only weeks or days before a release. Environments are manually configured and controlled.

1.7.6      Transient Environments

  • Pattern Utilizing the Automate Provisioning, Scripted Deployment and Scripted Database patterns, any environment should be capable of terminating and launching at will.
  • Anti-patterns Environments are fixed to “DEV, QA” or other predetermined environments.

1.8       Data- pattern and anti-pattern

1.8.1      Database Sandbox

  • Pattern Create a lightweight version of your database – using the Isolate Test Data pattern. Each developer uses this lightweight DML to populate his local database sandboxes to expedite test execution.
  • Anti-patterns Shared database. Developers and testers are unable to make data changes without the risk of immediately and adversely affecting other team members.

1.8.2      Decouple Database

  • Pattern Ensure your application is backward and forward compatible with your database so you can deploy each independently.
  • Anti-patterns The application and the database cannot be deployed separately.

1.8.3      Database Upgrade

  • Pattern Use scripts to apply incremental changes in each target environment to a database schema and data (see the sketch below).
  • Anti-patterns Manually applying database and data changes in each target environment.
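
A minimal sketch of scripted, incremental database upgrades, using SQLite and a hypothetical db/migrations directory of numbered SQL files; the version-tracking table is an assumption, and real projects often use a dedicated tool such as Flyway or Liquibase.

```python
# Numbered SQL scripts are applied incrementally; the schema version applied
# so far is recorded in the database itself, so the script is safe to re-run
# in every target environment.
import sqlite3
from pathlib import Path

def migrate(conn, migrations_dir="db/migrations"):
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    current = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()[0] or 0
    # Scripts are named like 001_create_accounts.sql, 002_add_balance_column.sql, ...
    for script in sorted(Path(migrations_dir).glob("*.sql")):
        version = int(script.name.split("_")[0])
        if version > current:
            conn.executescript(script.read_text())
            conn.execute("INSERT INTO schema_version (version) VALUES (?)", (version,))
            conn.commit()

if __name__ == "__main__":
    migrate(sqlite3.connect("app.db"))
```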

1.8.4      Scripted Database

  • Pattern Script all database actions as part of the build process.
  • Anti-patterns Using data export/import to apply data changes. Manually applying schema and data changes to the database.

1.9       Incremental development- pattern and anti-pattern

1.9.1      Branch by Abstraction

  • Pattern Instead of using version-control branches, create an abstraction layer that handles both an old and a new implementation (see the sketch below). Remove the old implementation once it is no longer needed.
  • Anti-patterns Branching using the version-control system leading to branch proliferation and difficult merging. Feature branching.
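
A small sketch of Branch by Abstraction: an abstraction layer on the mainline fronts both the old and the new implementation, so switching is a configuration change rather than a long-lived branch; the tax-calculation example is purely illustrative.

```python
# Both implementations live on the mainline behind one abstraction; once the
# new implementation is proven, the legacy class and the switch are deleted.
from abc import ABC, abstractmethod

class TaxCalculator(ABC):
    @abstractmethod
    def calculate(self, amount: float) -> float: ...

class LegacyTaxCalculator(TaxCalculator):
    def calculate(self, amount: float) -> float:
        return round(amount * 0.19, 2)  # existing behaviour keeps working in production

class RewrittenTaxCalculator(TaxCalculator):
    def calculate(self, amount: float) -> float:
        return round(amount * 0.19, 2)  # incremental rewrite, same contract

def make_tax_calculator(use_rewrite: bool) -> TaxCalculator:
    return RewrittenTaxCalculator() if use_rewrite else LegacyTaxCalculator()
```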

1.9.2      Toggle Features

  • Pattern Deploy new features or services to production but limit access dynamically for testing purposes (a sketch follows below).
  • Anti-patterns Waiting until a feature is fully complete before committing the source code.
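
A minimal feature-toggle sketch: the half-built feature is committed and deployed, but access is limited dynamically. Reading flags from environment variables is an assumption made to keep the example small; a configuration service or flag file works the same way.

```python
# Incomplete features ship dark behind a toggle; access is widened dynamically.
import os

def feature_enabled(name, user=None):
    key = f"FEATURE_{name.upper()}"
    enabled_for_all = os.environ.get(key, "off") == "on"
    testers = os.environ.get(f"{key}_USERS", "").split(",")
    return enabled_for_all or (user is not None and user in testers)

def render_dashboard(user):
    if feature_enabled("new_reports", user):
        return "new reports dashboard"  # in-progress feature, visible only to selected users
    return "classic dashboard"
```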

1.10    Collaboration- pattern and anti-pattern

1.10.1   Delivery Retrospective

  • Pattern For each iteration, hold a retrospective meeting where everybody on the Cross-Functional Team discusses how to improve the delivery process for the next iteration.
  • Anti-patterns Waiting until an error occurs during a deployment for Dev and Ops to collaborate. Having Dev and Ops work separately.

1.10.2   Cross-Functional Teams

  • Pattern Everybody is responsible for the delivery process. Any person on the Cross-Functional Team can modify any part of the delivery system.
  • Anti-patterns Siloed teams: Development, Testing, and Operations have their own scripts and processes and are not part of the same team. Amazon.com has an interesting take on this approach. They call it “You build it, you run it”. Developers take the software they’ve written all the way to production.

1.10.3   Root-Cause Analysis

  • Pattern Learn the root cause of a delivery problem by asking “why” of each answer and symptom until discovering the root cause.
  • Anti-patterns Accepting the symptom as the root cause of the problem.

1.11    Artefacts- pattern and anti-pattern

1.11.1   Reuse artifacts

  • Reuse-Friendly Culture
    An organization that recognizes that it is honorable to reuse the work of others when appropriate, disgraceful to develop something that could have been found elsewhere, and virtuous to consider the needs of others by generalizing appropriately. Organizations with a Reuse-Friendly Culture recognize that reuse is a cross-project, infrastructural effort based on cooperation and teamwork.
  • Robust Artifact
    An item that is well-documented, built to meet general needs instead of project-specific needs, thoroughly tested, and has several examples to show how to work with it. Items with these qualities are much more likely to be reused than items without them. A Robust Artifact is an item that is easy to understand and work with.
  • Self-Motivated Generalization
    Developers often generalize an item (that is, make it potentially reusable by others) out of pride in their work. This is very common within the open source community, in which developers create source code that they share universally. Peer recognition for quality of work is often more important to a developer than a monetary reward.
  • Senior Reuse Engineer
    The developer of a reusable artifact must have the skills necessary to develop a Robust Artifact and be able to support its use by other developers. This skillset is typically found in senior developers who have a wide range of development and maintenance experience, who have a software engineering background, and who have successfully reused the work of others on actual projects.
  • Reuseless Artifact
    An artifact believed to be reusable, often because it is Declared Reusable, which is not reused by anyone.
    Someone other than the original developer must review a Reuseless Artifact to determine whether or not anyone might be interested in it. If so, the artifact must be reworked to become a Robust Artifact.
  • Repository-Driven Reuse
    The belief that creating a reuse repository, a mechanism that stores and automates management of potentially reusable items, will drive reuse within your organization. Often a result of Reuse Comes Free.
    Many organizations achieve significant reuse simply by maintaining a directory of reusable artifacts, and people often find these artifacts through word of mouth and not a fancy search mechanism. This makes it doubtful that a reuse repository is any sort of prerequisite for success. You need a Reuse-Friendly Culture to achieve high levels of reuse, not a nifty new tool. Yes, a repository does provide a search facility and configuration management, but these features only support reuse, they don’t guarantee it.
  • NIH Syndrome Excuse
    Developers of a Reuseless Artifact claim that others don’t reuse it because those developers didn’t create it themselves – the “not invented here” (NIH) syndrome.
    Professional developers constantly seek to reuse the work of others because it frees them up to work on the domain-specific portions of their own applications. People will readily reuse Robust Artifacts, not items that are only Declared Reusable.
  • Declared Reusable (also known as “If You Build It, They Will Come”)
    The belief that something is reusable simply because you state that it is so. Often a result of Reuse Comes Free.
    Although this approach does engender some reuse, the typical result is a collection of Reuseless Artifacts. Reuse is about quality, not quantity. You know that something is reusable only after it has been reused; reusability is in the eye of the beholder, not the eye of the creator.
  • Reward-Driven Reuse
    The belief that all your organization needs is a reward program to achieve high levels of reuse.
    Most bonuses work out to less than minimum wage when calculated on an hourly basis, so it’s doubtful that people do it for the money. Self-Motivated Generalization and reuse of Robust Artifacts are far more common in practice. Trust your coworkers. They’ll do the right thing when given the opportunity.
  • Production Before Consumption
    The belief that you can start by building reusable artifacts.
    The reality is that you need to invest heavily to make reuse a reality. Dedicate resources to develop Robust Artifacts. Grow and support a Reuse-Friendly Culture. Put configuration management and change control processes in place. Reuse driven from the top down requires infrastructure management processes such as organization-level architectural modeling.
  • Code Reuse Only
    The belief that you can only reuse code.
    Of the many items that you can reuse, such as components and documentation templates, code reuse is typically the least productive. You can reuse a wide variety of artifacts throughout the entire software life cycle.
  • Project-Driven Reuse
    The limiting of generalization decisions to the scope of a single project.
    Yes, a single project may be able to obtain some level of reuse on its own, but reuse is a multi-project effort. Generalizing a date routine, or a use-case template, or a user-interface design standards document offers little value to the project. The benefits of generalization efforts are often realized by the projects that come later. You need the infrastructure-oriented viewpoint of a Reuse-Friendly Culture.

1.11.2   Promoting artifacts between repositories is a poor man’s metadata

Note: this antipattern used to be known as Mutable Binary Location

A Continuous Delivery pipeline is an automated representation of the value stream of an organisation, and rules are often codified in a pipeline to reflect the real-world journey of a product increment. This means artifact status as well as artifact content must be tracked as an artifact progresses towards production.

One way of implementing this requirement is to establish multiple artifact repositories, and promote artifacts through those repositories as they successfully pass different pipeline stages. As an artifact enters a new repository it becomes accessible to later stages of the pipeline and inaccessible to earlier stages.

For example, consider an organisation with a single QA environment and multiple repositories used to house in-progress artifacts. When an artifact is committed and undergoes automated testing it resides within the development repository.

When that artifact passes automated testing it is signed off for QA, which will trigger a move of that artifact from the development repository to the QA repository. It now becomes available for release into the QA environment.

When that artifact is pulled into the QA environment and successfully passes exploratory testing it is signed off for production by a tester. The artifact will be moved from the QA repository to the production repository, enabling a production release at a later date.

A variant of this strategy is for multiple artifact repositories to be managed by a single repository manager, such as Artifactory or Nexus.

This strategy fulfils the basic need of restricting which artifacts can be pulled into pre-production and production environments, but its reliance upon repository tooling to represent artifact status introduces a number of problems:

  • Reduced feedback – an unknown artifact can only be reported as not found, yet it could be an invalid version, an artifact in an earlier stage, or a failed artifact
  • Orchestrator complexity – the pipeline runner has to manage multiple repositories, knowing which repository to use for which environment
  • Inflexible architecture – if an environment is added to or removed from the value stream the toolchain will have to change
  • Lack of metrics – pipeline activity data is limited to vendor-specific repository data, making it difficult to track wait times and cycle times

A more flexible approach better aligned with Continuous Delivery is to establish artifact status as a first-class concept in the pipeline and introduce per-binary metadata support.

When a single repository is used, all artifacts reside in the same location alongside their versioned metadata, which provides a definitive record of artifact activity throughout the pipeline. This means unknown artifacts can easily be identified, the complexity of the pipeline orchestrator can be reduced, and any value stream design can be supported over time with no changes to the repository itself.

Furthermore, as the collection of artifact metadata stored in the repository indicates which artifact passed or failed which environment at any given point in time, it becomes trivial to build pipeline dashboards that can display pending releases, application cycle times, and where delays are occurring in the value stream. This is a crucial enabler of organisational change for Continuous Delivery, as it indicates where bottlenecks are occurring in the value stream – likely between people working in separate teams in separate silos.
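
A toy sketch of artifact status as first-class, versioned metadata: every pipeline stage appends a record for the binary it exercised, and release decisions query that history instead of moving the binary between repositories. The JSON file and field names are assumptions; a repository manager’s metadata or property API could hold the same records.

```python
# Per-binary metadata kept alongside the artifact in a single repository.
import json
import time

def record_stage_result(metadata_file, artifact_version, environment, passed):
    try:
        with open(metadata_file) as f:
            history = json.load(f)
    except FileNotFoundError:
        history = []
    history.append({
        "artifact": artifact_version,
        "environment": environment,
        "passed": passed,
        "timestamp": time.time(),
    })
    with open(metadata_file, "w") as f:
        json.dump(history, f, indent=2)

def releasable(history, artifact_version, required=("commit", "qa")):
    """An artifact may be released only if it passed every required stage."""
    passed = {entry["environment"] for entry in history
              if entry["artifact"] == artifact_version and entry["passed"]}
    return set(required) <= passed
```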

Source: http://www.alwaysagileconsulting.com/articles/pipeline-antipattern-artifact-promotion/

1.11.3   Malware

For compliance reasons, artifacts must be automatically scanned for malware.

Source: http://www.alwaysagileconsulting.com/articles/tag/pattern/
