
Deployments/Holding the train

Holding the deployment train is not something the Release Engineering team takes lightly. When RelEng holds a deployment train for a production error, we expect all engineers with relevant expertise to be focused on resolving the issue. A quick resolution is beneficial to all engineers, as holding the train, counter-intuitively, can create more problems than it solves: over time the versions of MediaWiki and extensions deployed to the cluster diverge further and further from the primary development versions (e.g. master) of the code.

We may also pause the train for non-emergency issues which have an expected resolution in the near future. This is done at the discretion of the train conductor, and should be communicated on the blocker task. Pausing is appropriate for issues whose resolution will substantially improve user experience, save developers and deployers substantial work, or increase deployment safety (e.g. by removing logspam).

Issues that hold the train

This is a non-exhaustive list of things that would cause the train to pause or roll back. As always, it's up to the best judgment of SRE and Release Engineering, but the following are representative examples of what we'd take action on:

Security issues

Issues flagged by the security team.

Data loss

Any loss of data.

Major feature regressions

  • Inability to login/logout/create account for a large portion of users
  • Inability to edit for a large portion of users

Performance regressions

  • Page load time
  • Page save/update time

Major stylistic problems affecting all pages

  • Complete loss of UI elements critical for reading and top-level navigation on a skin that is the default for mobile or desktop users (in practice these are the Vector, Vector 2022, or Minerva skins), e.g. the page has no text, all the links are gone, or styles are not loaded
  • Issues on opt-in only skins (Timeless, Monobook, Modern, and CologneBlue) should seldom block the train, but should be fixed promptly (e.g. by the end of the week).
    • Exceptions are possible, where editing experience is judged to be severely impacted for a significant fraction of edits (e.g. editing is not loading). When justifying such a blocker please use data wherever possible.
    • Purely cosmetic issues that can easily be patched via site CSS should never block the existing train but should be addressed promptly and potentially block the next train.
  • For other issues, avoid making rollback and block decisions on individual judgment alone.
    • Establish a time limit on when a decision needs to be made
    • Include the introducer of the bug and the product owner in the decision making, where known.
    • Ideally, the product manager of the product with the regression should take responsibility for the decision.

Error-rate increases (See #Logspam)

  • Any new error messages that occur frequently enough to be noticed in the logs and dashboards where deployers watch for breakage will block the train.
  • If the frequency of an error increases significantly after a deployment, then the deployment should be immediately rolled back until the error can be fixed and the branch re-deployed.
  • Even DEBUG / INFO-level logs are a problem, especially if the frequency of the messages is high enough to put unnecessary load on the logstash servers.
  • The total client error rate graph is in the red zone due to an open UBN! task.
  • Newly introduced bugs that show up on the mw-client-errors dashboard or the mw-client-error editing dashboard at a rate of:
    • over 100 errors in a 1-hour period (over 1,000 errors in a 12-hour period, or over 14,000 in 7 days)

Deprecations

  • PHP Deprecation messages block the following week's train.
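
Where do these messages come from? As a hedged sketch (the function and version number below are hypothetical), they may be emitted either by the PHP engine itself (e.g. passing null to a non-nullable parameter of a built-in function on PHP 8.1+) or by MediaWiki code calling wfDeprecated() in a hard-deprecated code path:

    <?php
    // Hypothetical examples of code paths that produce PHP deprecation messages.

    // 1. A PHP engine deprecation (PHP 8.1+): passing null to a non-nullable
    //    built-in parameter logs "Deprecated: strlen(): Passing null ...".
    $value = null;
    $length = strlen( $value );
    // Fixed: coerce the null explicitly so no deprecation is emitted.
    $length = strlen( $value ?? '' );

    // 2. A MediaWiki hard deprecation: the deprecated function calls
    //    wfDeprecated(), which logs a message until all callers are migrated.
    function getLegacyText() {
        wfDeprecated( __FUNCTION__, '1.41' ); // version number is hypothetical
        return 'Example';
    }

Either kind of message appearing in production logs would block the following week's train.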

Severity increases with server groups (usually)

group1 servers have ~10x more users than group0 servers, and ~100x fewer users than group2 servers, so smaller issues might justify holding back the group2 rollout while not meriting an immediate revert from group1 or group0. A notable exception is global functionality, such as CentralAuth, OAuth, or CentralNotice, which runs code on metawiki (in group1) to support functionality on Wikipedias (in group2); for issues with such code, the group1 rollout should be handled more conservatively.

What happens during backport windows while the train is on hold?

Only simple config changes and emergency fixes are allowed during backport windows while we are reverted. This is to reduce the complexity during investigation.

Remember, while we are reverted people are diligently diagnosing and debugging issues; any seemingly unrelated change could in fact affect their investigations.

What happens next?

  • If a blocker was found and addressed before 3pm Pacific Tues/Wed/Thur THEN
    • the planned deploy/rollout can move forward at that time (deployment schedule permitting)
  • If the new wmf.XX version wasn't deployed to group2 (all Wikipedias) on Thursday due to blockers THEN
    • If there is a fix available for deploy, RelEng will attempt to get the train back on track to ensure we adhere as closely as possible to the train schedule.
    • An incident report will be filed to address follow-up actions and process improvements, and,
    • A post-mortem will be conducted.
  • If there are issues affecting performance discovered significantly after the current version of MediaWiki and extensions has been deployed to all wikis (group2, Thursday) THEN
    • The current code version will remain on the servers; we will not attempt to roll back to a version more than one week old, and,
    • The next rollout of the following release will be at the Performance Team's discretion, and,
    • An incident report will be filed to address follow-up actions and process improvements, and,
    • A post-mortem will be conducted.

Train "blocker tasks"

What: For each weekly train version rollout, an accompanying task is filed in Phabricator. They all live in the #Train-Deployments tag. You can find the current task at https://train-blockers.toolforge.org.

Purpose: The purpose of these tasks is to track the rollout of the train, especially any blocking issues that may arise (see above). These blocking issues are filed as sub-tasks.

Blocking (sub) task types:
  • A task which causes an entire revert/rollback to the previously deployed version and which must be addressed before moving forward.
  • A task which prevents the continued rollout of the new version until it is addressed.

Priority of blocking (sub) tasks:

Tasks which block the train from moving forward or cause it to be rolled back are set to UBN! ("Unbreak Now!") priority, as getting the train moving again should be the highest priority for the person(s)/team responsible for the code in question.

Status of blocking (sub) tasks:

Most times a blocking task must be "Resolved" in Phabricator for the train to move forward. Occasionally the task itself is not resolved because the issue has been worked around in another way, for instance when a backport was prepared and merged to the deployment branch but the fix is not yet merged in master. In that case the task will normally be closed after the patch is merged into master.

Communication on blocking tasks:

The "train conductor" for that week, or the backup conductor, is responsible for commenting on any blocking (sub) tasks with their assumptions on status and impact, especially if they choose to move the train forward with the task not set to "Resolved" for whatever reason. The reason for this commenting (and potential over communication) is to ensure all parties are aware of all assumptions and decisions.

Maintaining the task series in Phabricator:

Periodically, the release manager will create batches of new tasks in Phabricator for planned upcoming MediaWiki versions. This is accomplished by running the phab-train-blocker script in the releng/release repo. For documentation, see: Deployments/Blocking_Tasks

Logspam

What it is

Logspam is the term we use to describe the category of noisy error messages in our logs. These don't necessarily represent user-facing error conditions; often they are errors that are being ignored or aren't a high priority for the responsible parties (when any exist).

Specific error messages that have been identified by deployers and log triagers are tracked in the #Wikimedia-Production-Error Phabricator project.

Why it's a problem

Logspam is a problem because noisy logs make it hard to detect problems quickly when looking at log dashboards.

All deployers need to be able to quickly detect any new problems that are introduced by their newly deployed code. If important error messages are drowned out by logspam then deployers can easily miss more serious issues. If code produces extraneous errors in production logs, then that code is considered broken, even if there is no immediate user-facing impact.

Major causes (and how you can fix them)

Incorrectly categorized log messages

The most common example of this type would be expected (or known) conditions being recorded as exceptional conditions, e.g. debug notices or warnings being logged as errors. This is an incorrect use of logging and should be corrected.
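
As a minimal sketch (the channel name, function, and condition are hypothetical), an expected condition should go to a low-severity level on the usual PSR-3 logger obtained from MediaWiki\Logger\LoggerFactory, with error() reserved for conditions that genuinely need a deployer's attention:

    <?php
    use MediaWiki\Logger\LoggerFactory;

    // Hypothetical helper: a missing optional key is an expected condition,
    // so a miss is logged at debug level, not as an error.
    function lookupExampleValue( array $data, string $key ) {
        $logger = LoggerFactory::getInstance( 'example-feature' );

        if ( !array_key_exists( $key, $data ) ) {
            $logger->debug( 'No value for {key}, using default', [ 'key' => $key ] );
            return null;
        }

        return $data[$key];
    }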

Notice "Undefined variable", "Undefined index", or "Undefined offset"

These are a common occurrence in PHP code. Whenever you attempt to access a variable or an array index that doesn't exist, PHP logs a notice. These are coding errors and they need to be fixed. It might be that the input is malformed and the error is in the caller; or it might be a mistyped reference; or it might be that the key is allowed to be absent but the developer forgot to access it conditionally.
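
For the last case, a hedged minimal sketch (the $options array and default value are hypothetical):

    <?php
    // Hypothetical options array; 'limit' is an optional key.
    $options = [ 'format' => 'json' ];

    // Buggy: if 'limit' is absent, PHP logs an "Undefined index" /
    // "Undefined array key" message.
    // $limit = $options['limit'];

    // Fixed: access the key conditionally, with an explicit default.
    $limit = $options['limit'] ?? 50;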

See also