PATCH RELEASE Increase API max task to 20 + limit max healthy to 120% by leite08 · Pull Request #3606 · metriport/metriport · GitHub

PATCH RELEASE Increase API max task to 20 + limit max healthy to 120% #3606


Merged
merged 1 commit into master from 799-increase-task-count-limit-max-healthy on Apr 3, 2025

Conversation

@leite08 (Member) commented Apr 3, 2025

Ref. metriport/metriport-internal#799

Dependencies

Description

Increase API max task count to 20 + limit max healthy percent to 120% - context

Testing

none

Release Plan

  • ⚠️ Points to master
  • Merge upstream
  • Merge this

Summary by CodeRabbit

  • New Features
    • Enhanced production capacity to better serve increased demand.
    • Updated service health-check thresholds to offer improved performance and reliability.
    • Optimized CPU scaling settings for more responsive load management.

Ref. metriport/metriport-internal#799

Signed-off-by: Rafael Leite <2132564+leite08@users.noreply.github.com>
coderabbitai bot commented Apr 3, 2025

Walkthrough

The changes modify the ECS service configuration for the API service, updating task limits and scaling parameters. In production, the maximum task count increases from 16 to 20. The service creation function now sets two additional deployment properties, maxHealthyPercent (120) and minHealthyPercent (80), which bound how many tasks may run relative to the desired count during deployments. Additionally, the CPU scaling threshold is lowered from 80% to 60%, making auto scaling respond to load sooner.

Changes

File: packages/infra/lib/api-stack/api-service.ts
Change Summary:
  • Updated production maxTaskCount from 16 to 20 in getEnvSpecificSettings
  • Added maxHealthyPercent: 120 and minHealthyPercent: 80 in createAPIService
  • Lowered targetUtilizationPercent from 80 to 60
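
For orientation, here is a minimal sketch of how these three settings typically fit together in a CDK-based ECS service definition. The construct choice, desired count, capacity floor, and container image below are illustrative assumptions, not code copied from api-service.ts; only the changed values (maxTaskCount 20, maxHealthyPercent 120, minHealthyPercent 80, targetUtilizationPercent 60) come from this PR.

import { Stack } from "aws-cdk-lib";
import * as ecs from "aws-cdk-lib/aws-ecs";
import * as ecs_patterns from "aws-cdk-lib/aws-ecs-patterns";

// Sketch only: createAPIService in the repo takes many more parameters
// (env config, secrets, load balancer setup, etc.); this shows where the
// PR's values plug in.
function createApiServiceSketch(stack: Stack, cluster: ecs.ICluster, maxTaskCount: number) {
  const fargateService = new ecs_patterns.ApplicationLoadBalancedFargateService(
    stack,
    "APIService",
    {
      cluster,
      desiredCount: 2, // illustrative, not from the PR
      maxHealthyPercent: 120, // added by this PR
      minHealthyPercent: 80, // added by this PR
      taskImageOptions: {
        image: ecs.ContainerImage.fromRegistry("amazon/amazon-ecs-sample"), // placeholder
      },
    }
  );

  const scaling = fargateService.service.autoScaleTaskCount({
    minCapacity: 1, // illustrative
    maxCapacity: maxTaskCount, // 20 in production after this PR (was 16)
  });
  scaling.scaleOnCpuUtilization("autoscale_cpu", {
    targetUtilizationPercent: 60, // lowered from 80
  });
  return fargateService;
}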

Sequence Diagram(s)

sequenceDiagram
    participant Env as Environment
    participant API as APIService
    participant ECS as ECS Configuration

    Env->>API: Request environment-specific settings
    API-->>Env: Return configuration (maxTaskCount = 20 for prod)
    API->>API: Prepare service creation with new settings
    note right of API: Add health check properties:
    note right of API: - maxHealthyPercent: 120<br>- minHealthyPercent: 80
    API->>ECS: Create API service with CPU scaling (targetUtilizationPercent = 60)
    ECS-->>API: Confirm service creation with updated configuration

📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4575394 and be7ea44.

📒 Files selected for processing (1)
  • packages/infra/lib/api-stack/api-service.ts (3 hunks)
🧰 Additional context used
📓 Path-based instructions (1)

`**/*.ts`:
  • Use the Onion Pattern to organize a package's code in layers
  • Try to use immutable code and avoid sharing state across different functions, objects, and systems
  • Try to build code that's idempotent whenever possible
  • Prefer functional programming style functions: small, deterministic, 1 input, 1 output
  • Minimize coupling / dependencies
  • Avoid modifying objects received as parameter
  • Only add comments to code to explain why something was done, not how it works
  • Naming
    • classes, enums: PascalCase
    • constants, variables, functions: camelCase
    • file names: kebab-case
    • table and column names: snake_case
    • Use meaningful names, so whoever is reading the code understands what it means
    • Don’t use negative names, like notEnabled, prefer isDisabled
    • For numeric values, if the type doesn’t convey the unit, add the unit to the name
  • Typescript
    • Use types
    • Prefer const instead of let
    • Avoid any and casting from any to other types
    • Type predicates: only applicable to narrow down the type, not to force a complete type conversion
    • Prefer deconstructing parameters for functions instead of multiple parameters that might be of
      the same type
    • Don’t use null inside the app, only on code interacting with external interfaces/services,
      like DB and HTTP; convert to undefined before sending inwards into the code
    • Use async/await instead of .then()
    • Use the strict equality operator ===, don’t use abstract equality operator ==
    • When calling a Promise-returning function asynchronously (i.e., not awaiting), use .catch() to
      handle errors (see processAsyncError and emptyFunction depending on the case)
    • Date and Time
      • Always use buildDayjs() to create dayjs instances
      • Prefer dayjs.duration(...) to create duration consts and keep them as duration
  • Prefer Nullish Coalesce (??) than the OR operator (||) to provide a default value
  • Avoid creating arrow functions
  • Use truthy syntax instead of in - i.e., if (data.link) not if ('link' in data)
  • Error handling
    • Pass the original error as the new one’s cause so the stack trace is persisted
    • Error messages should have a static message - add dynamic data to MetriportError's additionalInfo prop
    • Avoid sending multiple events to Sentry for a single error
  • Global constants and variables
    • Move literals to constants declared after imports when possible (avoid magic numbers)
    • Avoid shared, global objects
  • Avoid using console.log and console.error in packages other than utils, infra and shared,
    and try to use out().log instead
  • Avoid multi-line logs
    • don't send objects as a second parameter to console.log() or out().log()
    • don't create multi-line strings when using JSON.stringify()
  • Use eslint to enforce code style
  • Use prettier to format code
  • max column length is 100 chars
  • multi-line comments use /** */
  • scripts: top-level comments go after the import
Files: packages/infra/lib/api-stack/api-service.ts
🔇 Additional comments (3)
packages/infra/lib/api-stack/api-service.ts (3)

55-55: Appropriate increase of max task count for production

Increasing the maxTaskCount from 16 to 20 allows for higher horizontal scaling capacity in production, supporting the PR objective to increase API max task count. This change provides more headroom for handling traffic spikes.


317-318: Good addition of healthy percent parameters

Adding maxHealthyPercent: 120 and minHealthyPercent: 80 properly implements the PR objective to limit max healthy to 120%. This ensures that during deployments, the service maintains at least 80% of desired task count while allowing scaling up to 120% to handle the deployment transition seamlessly.
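
To make the deployment math concrete: these percentages apply to the service's desired count during a rolling deployment. A small illustrative helper (not from the repo), assuming a desired count of 10 tasks:

// Mirrors how ECS bounds running tasks during a deployment: the lower bound
// is rounded up, the upper bound rounded down.
function deploymentTaskBounds({
  desiredCount,
  minHealthyPercent,
  maxHealthyPercent,
}: {
  desiredCount: number;
  minHealthyPercent: number;
  maxHealthyPercent: number;
}) {
  return {
    minRunning: Math.ceil((desiredCount * minHealthyPercent) / 100), // old tasks kept alive
    maxRunning: Math.floor((desiredCount * maxHealthyPercent) / 100), // cap on total tasks
  };
}

// With this PR's values and an assumed desired count of 10:
// => { minRunning: 8, maxRunning: 12 }
console.log(
  deploymentTaskBounds({ desiredCount: 10, minHealthyPercent: 80, maxHealthyPercent: 120 })
);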


493-493: Lower CPU scaling threshold improves responsiveness

✅ Verification successful

Reducing the targetUtilizationPercent from 80% to 60% will make auto-scaling more responsive to increasing load. This is a good intermediate step toward the goal mentioned in your previous comment about potentially needing this value even lower (20-25%).


🏁 Script executed:

#!/bin/bash
# Check if there are any other scaling parameters in the codebase that might need adjustment for consistency

# Look for other CPU scaling threshold configurations
rg -A 5 "targetUtilizationPercent" --glob "*.ts"

# Look for other service scaling configurations
rg -A 5 "autoScaleTaskCount" --glob "*.ts"

Length of output: 5933


Auto-scaling CPU Threshold Update Verified

The change lowering the CPU scaling threshold from 80% to 60% in packages/infra/lib/api-stack/api-service.ts is consistent with similar adjustments made in other stacks (e.g., in fhir-converter-service.ts and terminology-server-service.ts). Notably, the memory scaling configuration still targets 80%, which appears intentional. This intermediate step should indeed improve responsiveness under increasing load while leaving room for a potentially even lower threshold in the future.
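
As a rough sketch of the resulting split (the memory policy id and the ScalableTaskCount wiring are assumptions; only the 60% CPU target and the unchanged 80% memory target come from the review above):

import * as ecs from "aws-cdk-lib/aws-ecs";

// scaling would be the object returned by service.autoScaleTaskCount(...)
function configureApiScalingSketch(scaling: ecs.ScalableTaskCount): void {
  scaling.scaleOnCpuUtilization("autoscale_cpu", {
    targetUtilizationPercent: 60, // lowered from 80 by this PR
  });
  scaling.scaleOnMemoryUtilization("autoscale_mem", {
    targetUtilizationPercent: 80, // unchanged; intentionally higher than the CPU target
  });
}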



@@ -488,7 +490,7 @@ export function createAPIService({
      maxCapacity: maxTaskCount,
    });
    scaling.scaleOnCpuUtilization("autoscale_cpu", {
-     targetUtilizationPercent: 80,
+     targetUtilizationPercent: 60,
@leite08 (Member, Author) commented Apr 3, 2025


Prob won't make a diff, we prob need to get this down to 20-25% so auto scaling happens, as this is based on the cluster avg... but taking one step at a time

@leite08 leite08 marked this pull request as ready for review April 3, 2025 18:39
@leite08 leite08 added this pull request to the merge queue Apr 3, 2025
Merged via the queue into master with commit c9e5ce3 Apr 3, 2025
13 checks passed
@leite08 leite08 deleted the 799-increase-task-count-limit-max-healthy branch April 3, 2025 18:51
Labels
None yet
Projects
None yet
2 participants