RELEASE Update API task count and auto-scale threshold by leite08 · Pull Request #3768 · metriport/metriport · GitHub

RELEASE Update API task count and auto-scale threshold #3768



Merged
merged 3 commits into master on May 1, 2025

Conversation

leite08
Member
@leite08 leite08 commented May 1, 2025

Issues:

Dependencies

none

Description

Testing

Check each PR.

Release Plan

  • ⚠️ Points to master
  • Merge this

Summary by CodeRabbit

  • Chores
    • Updated deployment and autoscaling settings for the production environment to improve service availability and resource scaling responsiveness.

leite08 added 3 commits April 30, 2025 21:34
Ref eng-226

Signed-off-by: Rafael Leite <2132564+leite08@users.noreply.github.com>
Ref eng-226

Signed-off-by: Rafael Leite <2132564+leite08@users.noreply.github.com>
ENG-226 Set max ECS task count on restart + Update autoscale threshold
coderabbitai bot commented May 1, 2025

Walkthrough

The changes adjust the deployment and autoscaling configuration for the ECS service in the production environment. Specifically, the maximum healthy task percentage is raised from 120% to 160%, and the minimum healthy task percentage is reduced from 80% to 70%. Additionally, the target utilization thresholds for autoscaling are lowered: CPU from 60% to 10% and memory from 80% to 20%. No other logic or control flow is affected.

Changes

File(s): packages/infra/lib/api-stack/api-service.ts
Change Summary: Increased maxHealthyPercent to 160% and decreased minHealthyPercent to 70% for production ECS deployments. Lowered the autoscaling CPU target to 10% and the memory target to 20%. No other logic modified.
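For quick reference, the changed values can be summarized as a plain object. This is an illustrative sketch only: the property names mirror the CDK deployment/scaling props described above, but this is not the actual api-service.ts code.

```typescript
// Illustrative sketch of the new production values from this PR.
// Property names are hypothetical, echoing the CDK props; the numbers
// are the values stated in the change summary.
const prodEcsConfig = {
  deployment: {
    maxHealthyPercent: 160, // raised from 120
    minHealthyPercent: 70, // lowered from 80
  },
  autoscaling: {
    cpuTargetUtilizationPercent: 10, // lowered from 60
    memoryTargetUtilizationPercent: 20, // lowered from 80
  },
};
```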

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant ECS Service
    participant Autoscaler

    User->>ECS Service: Deploy new version (production)
    ECS Service->>ECS Service: Apply new max/min healthy percentages (160%/70%)
    ECS Service->>Autoscaler: Monitor CPU/memory utilization
    Autoscaler->>ECS Service: Scale tasks if CPU > 10% or memory > 20%
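The scale-out condition in the diagram can be sketched as a small predicate. The helper name is hypothetical; the thresholds are the values this PR sets.

```typescript
// Hypothetical helper illustrating the diagram's scale-out condition.
const cpuTargetUtilizationPercent = 10;
const memoryTargetUtilizationPercent = 20;

function shouldScaleOut(cpuPercent: number, memoryPercent: number): boolean {
  return (
    cpuPercent > cpuTargetUtilizationPercent ||
    memoryPercent > memoryTargetUtilizationPercent
  );
}

// shouldScaleOut(12, 5) → true (CPU above the 10% target)
// shouldScaleOut(5, 15) → false (both metrics under their targets)
```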

Possibly related PRs


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c9be536 and a08d0cb.

📒 Files selected for processing (1)
  • packages/infra/lib/api-stack/api-service.ts (2 hunks)
🧰 Additional context used
📓 Path-based instructions (1)

**/*.ts: - Use the Onion Pattern to organize a package's code in layers

  • Try to use immutable code and avoid sharing state across different functions, objects, and systems
  • Try to build code that's idempotent whenever possible
  • Prefer functional programming style functions: small, deterministic, 1 input, 1 output
  • Minimize coupling / dependencies
  • Avoid modifying objects received as parameter
  • Only add comments to code to explain why something was done, not how it works
  • Naming
    • classes, enums: PascalCase
    • constants, variables, functions: camelCase
    • file names: kebab-case
    • table and column names: snake_case
    • Use meaningful names, so whoever is reading the code understands what it means
    • Don’t use negative names, like notEnabled, prefer isDisabled
    • For numeric values, if the type doesn’t convey the unit, add the unit to the name
  • Typescript
    • Use types
    • Prefer const instead of let
    • Avoid any and casting from any to other types
    • Type predicates: only applicable to narrow down the type, not to force a complete type conversion
    • Prefer deconstructing parameters for functions instead of multiple parameters that might be of
      the same type
    • Don’t use null inside the app, only on code interacting with external interfaces/services,
      like DB and HTTP; convert to undefined before sending inwards into the code
    • Use async/await instead of .then()
    • Use the strict equality operator ===, don’t use abstract equality operator ==
    • When calling a Promise-returning function asynchronously (i.e., not awaiting), use .catch() to
      handle errors (see processAsyncError and emptyFunction depending on the case)
    • Date and Time
      • Always use buildDayjs() to create dayjs instances
      • Prefer dayjs.duration(...) to create duration consts and keep them as duration
  • Prefer Nullish Coalesce (??) than the OR operator (||) to provide a default value
  • Avoid creating arrow functions
  • Use truthy syntax instead of in - i.e., if (data.link) not if ('link' in data)
  • Error handling
    • Pass the original error as the new one’s cause so the stack trace is persisted
    • Error messages should have a static message - add dynamic data to MetriportError's additionalInfo prop
    • Avoid sending multiple events to Sentry for a single error
  • Global constants and variables
    • Move literals to constants declared after imports when possible (avoid magic numbers)
    • Avoid shared, global objects
  • Avoid using console.log and console.error in packages other than utils, infra and shared,
    and try to use out().log instead
  • Avoid multi-line logs
    • don't send objects as a second parameter to console.log() or out().log()
    • don't create multi-line strings when using JSON.stringify()
  • Use eslint to enforce code style
  • Use prettier to format code
  • max column length is 100 chars
  • multi-line comments use /** */
  • scripts: top-level comments go after the import
  • packages/infra/lib/api-stack/api-service.ts
⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: check-pr / lint-build-test
  • GitHub Check: Analyze (javascript)
  • GitHub Check: infra-api-lambdas / deploy
🔇 Additional comments (3)
packages/infra/lib/api-stack/api-service.ts (3)

528-528:

❓ Verification inconclusive

CPU autoscaling threshold has been significantly lowered.

The CPU target utilization for autoscaling has been reduced from 60% to 10%, which will trigger scaling much earlier.

This is an extremely aggressive scaling configuration that will add new tasks when CPU utilization is just 10%. While this ensures responsiveness to load, it may lead to resource inefficiency and higher costs. Consider if this low threshold is truly necessary:


🏁 Script executed:

#!/bin/bash
# Check recent CPU utilization patterns to evaluate if this threshold is appropriate
aws cloudwatch get-metric-statistics \
  --namespace AWS/ECS \
  --metric-name CPUUtilization \
  --dimensions \
    Name=ClusterName,Value=$(aws ecs list-clusters | jq -r '.clusterArns[0]' | cut -d/ -f2) \
    Name=ServiceName,Value=$(aws ecs list-services --cluster $(aws ecs list-clusters | jq -r '.clusterArns[0]') | jq -r '.serviceArns[0]' | cut -d/ -f3) \
  --start-time $(date -u -v-1d +%Y-%m-%dT%H:%M:%SZ) \
  --end-time $(date -u +%Y-%m-%dT%H:%M:%SZ) \
  --period 3600 \
  --statistics Average

Length of output: 938


Revisit CPU-scale threshold (packages/infra/lib/api-stack/api-service.ts:528)
Reducing targetUtilizationPercent from 60% to 10% will cause new tasks to spin up at very low CPU load—potentially driving up task counts and cost. Please confirm this aggressiveness is intentional by reviewing recent CPU usage patterns (via CloudWatch console or valid AWS CLI environment) or load-test data before merging.

  • Location: line 528 in api-service.ts (targetUtilizationPercent: 10)
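To see why 10% is aggressive: target tracking roughly sizes the fleet so actual utilization lands near the target, meaning the desired count approaches runningTasks × actual / target. The sketch below is a first-order approximation of that behavior with a hypothetical helper, not the actual autoscaler logic.

```typescript
// First-order approximation of target-tracking scaling: illustrative only.
function approxDesiredTaskCount(
  runningTasks: number,
  actualUtilizationPercent: number,
  targetUtilizationPercent: number
): number {
  return Math.ceil(
    (runningTasks * actualUtilizationPercent) / targetUtilizationPercent
  );
}

// Same load, different targets: 4 tasks averaging 30% CPU.
// Old 60% target → approxDesiredTaskCount(4, 30, 60) = 2 tasks
// New 10% target → approxDesiredTaskCount(4, 30, 10) = 12 tasks
```

In other words, the same steady-state load that previously settled at a couple of tasks could now hold a fleet six times larger, which is why confirming intent against real utilization data matters here.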

60-61:

❓ Verification inconclusive

Deployment strategy has been adjusted to allow more flexibility.

The changes to deployment parameters will allow for faster deployments with a wider buffer between minimum and maximum task counts during deployment.

With a higher max (160%) and lower min (70%), this configuration allows for more aggressive rolling updates. Ensure this aligns with your deployment strategy and that you have sufficient resources to support 160% of tasks during deployments:


🏁 Script executed:

#!/bin/bash
# Check the current ECS service settings to confirm desired task count
aws ecs describe-services \
  --cluster $(aws ecs list-clusters | jq -r '.clusterArns[0]') \
  --services $(aws ecs list-services --cluster $(aws ecs list-clusters | jq -r '.clusterArns[0]') | jq -r '.serviceArns[0]') \
  | jq '.services[0].desiredCount, .services[0].deploymentConfiguration'

Length of output: 663


Ensure ECS Deployment Strategy Aligns with Cluster Capacity

The updated parameters (maxHealthyPercent: 160, minHealthyPercent: 70) allow more aggressive rolling updates but will temporarily require up to 160% of your usual task count. Please verify the following before merging:

  • ECS cluster has enough spare capacity (or Capacity Providers/Auto Scaling) to handle 160% of desired tasks during deployments.
  • Any associated costs for running extra tasks are acceptable.
  • Deployments have been tested in a staging environment to observe actual resource usage and deployment behavior.

Affected file:
packages/infra/lib/api-stack/api-service.ts
Lines 60–61:

  maxHealthyPercent: 160,
  minHealthyPercent: 70,
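The task-count window ECS permits mid-deployment under these settings can be sketched as follows. The helper is hypothetical; the rounding (lower bound up, upper bound down) follows my reading of the ECS deployment documentation, so verify against the current AWS docs.

```typescript
// Values from this PR.
const maxHealthyPercent = 160;
const minHealthyPercent = 70;

// Hypothetical helper: the running-task window ECS allows during a
// rolling deployment. Lower bound rounds up, upper bound rounds down
// (assumption based on the ECS deployment docs).
function deploymentTaskWindow(desiredCount: number): {
  minRunning: number;
  maxRunning: number;
} {
  return {
    minRunning: Math.ceil((desiredCount * minHealthyPercent) / 100),
    maxRunning: Math.floor((desiredCount * maxHealthyPercent) / 100),
  };
}

// With 10 desired tasks, between 7 and 16 tasks may run mid-deploy,
// so the cluster must absorb up to 6 extra tasks' worth of capacity.
```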

533-533:

❓ Verification inconclusive

Memory autoscaling threshold has been significantly lowered.

The memory target utilization for autoscaling has been reduced from 80% to 20%, which will trigger scaling much earlier.

Similar to the CPU threshold change, this 20% memory threshold is very aggressive and could lead to resource inefficiency. Consider if historical data supports the need for such early scaling:


🏁 Script executed:

#!/bin/bash
# Check recent memory utilization patterns to evaluate if this threshold is appropriate
aws cloudwatch get-metric-statistics \
  --namespace AWS/ECS \
  --metric-name MemoryUtilization \
  --dimensions \
    Name=ClusterName,Value=$(aws ecs list-clusters | jq -r '.clusterArns[0]' | cut -d/ -f2) \
    Name=ServiceName,Value=$(aws ecs list-services --cluster $(aws ecs list-clusters | jq -r '.clusterArns[0]') | jq -r '.serviceArns[0]' | cut -d/ -f3) \
  --start-time $(date -u -v-1d +%Y-%m-%dT%H:%M:%SZ) \
  --end-time $(date -u +%Y-%m-%dT%H:%M:%SZ) \
  --period 3600 \
  --statistics Average

Length of output: 941


Manual Memory Utilization Check Required

The memory target utilization has been lowered from 80% to 20% (packages/infra/lib/api-stack/api-service.ts @ line 533), which may trigger autoscaling far earlier than necessary and drive up costs. We couldn’t pull CloudWatch data in this environment—please verify against your historical MemoryUtilization metrics that a 20% threshold is justified for your workload.

• File: packages/infra/lib/api-stack/api-service.ts
Line: 533 (targetUtilizationPercent: 20,)

Example CLI (run in your AWS environment):

aws cloudwatch get-metric-statistics \
  --namespace AWS/ECS \
  --metric-name MemoryUtilization \
  --dimensions Name=ClusterName,Value=<your-cluster> Name=ServiceName,Value=<your-service> \
  --start-time $(date -u -d '1 day ago' +%Y-%m-%dT%H:%M:%SZ) \
  --end-time   $(date -u +%Y-%m-%dT%H:%M:%SZ) \
  --period 3600 \
  --statistics Average


@leite08 leite08 marked this pull request as ready for review May 1, 2025 12:32
@leite08 leite08 added this pull request to the merge queue May 1, 2025
Merged via the queue into master with commit e3a6eed May 1, 2025
60 checks passed
3 participants