fix: multiple pool reads parallelize when possible by harshavardhana · Pull Request #11537 · minio/minio · GitHub

fix: multiple pool reads parallelize when possible #11537


Merged
harshavardhana merged 1 commit into minio:master from the pools branch on Feb 16, 2021

Conversation

@harshavardhana (Member) commented on Feb 15, 2021

Description

fix: multiple pool reads parallelize when possible

Motivation and Context

Improves performance for lookups on small files when more than one pool is present.
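
A rough, minimal sketch of the idea (illustrative only, not the code in this PR; `lookupObject` and `findObjectAcrossPools` are hypothetical stand-ins for the per-pool metadata read, e.g. GetObjectInfo, and its multi-pool wrapper): instead of probing pools one after another, query every pool concurrently and return the result from the lowest-indexed pool that has the object.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"sync"
)

// poolResult collects the outcome of a lookup against one pool.
type poolResult struct {
	info string // stands in for the object's metadata (e.g. ObjectInfo)
	err  error
}

// lookupObject is a hypothetical stand-in for a per-pool metadata read;
// here it only pretends that pool-2 has the object.
func lookupObject(ctx context.Context, pool, object string) (string, error) {
	if pool == "pool-2" {
		return "found " + object + " in " + pool, nil
	}
	return "", errors.New("object not found")
}

// findObjectAcrossPools queries all pools concurrently, then returns the
// result from the lowest-indexed pool that reported the object, so the
// original pool ordering is preserved.
func findObjectAcrossPools(ctx context.Context, pools []string, object string) (string, error) {
	results := make([]poolResult, len(pools))
	var wg sync.WaitGroup
	for i, pool := range pools {
		wg.Add(1)
		go func(i int, pool string) {
			defer wg.Done()
			info, err := lookupObject(ctx, pool, object)
			results[i] = poolResult{info: info, err: err}
		}(i, pool)
	}
	wg.Wait()
	for _, r := range results {
		if r.err == nil {
			return r.info, nil
		}
	}
	return "", errors.New("object not found in any pool")
}

func main() {
	info, err := findObjectAcrossPools(context.Background(),
		[]string{"pool-1", "pool-2", "pool-3"}, "small.txt")
	fmt.Println(info, err)
}
```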

How to test this PR?

Nothing special; all functionality should work as is.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Optimization (provides speedup with no functional changes)
  • Breaking change (fix or feature that would cause existing functionality to change)

Checklist:

  • Fixes a regression (If yes, please add commit-id or PR # here)
  • Documentation updated
  • Unit tests added/updated

@harshavardhana changed the title from fix: multiple pool reads parallelize to fix: multiple pool reads parallelize when possible on Feb 16, 2021
@harshavardhana marked this pull request as ready for review on February 16, 2021 03:08
@minio-trusted (Contributor) commented:

Mint Automation

Test Result
mint-large-bucket.sh ✔️
mint-fs.sh ✔️
mint-gateway-s3.sh ✔️
mint-erasure.sh ✔️
mint-dist-erasure.sh ✔️
mint-zoned.sh ✔️
mint-gateway-nas.sh ✔️
mint-compress-encrypt-dist-erasure.sh more...

11537-c51b613/mint-compress-encrypt-dist-erasure.sh.log:

Running with
SERVER_ENDPOINT:      minio-c2.minio.io:32738
ACCESS_KEY:           minio
SECRET_KEY:           ***REDACTED***
ENABLE_HTTPS:         0
SERVER_REGION:        us-east-1
MINT_DATA_DIR:        /mint/data
MINT_MODE:            full
ENABLE_VIRTUAL_STYLE: 0

To get logs, run 'docker cp eeec72837e0c:/mint/log /tmp/mint-logs'

(1/15) Running aws-sdk-go tests ... done in 1 seconds
(2/15) Running aws-sdk-java tests ... done in 2 seconds
(3/15) Running aws-sdk-php tests ... done in 44 seconds
(4/15) Running aws-sdk-ruby tests ... done in 4 seconds
(5/15) Running awscli tests ... FAILED in 32 seconds
{
  "name": "awscli",
  "duration": 2718,
  "function": "aws --endpoint-url http://minio-c2.minio.io:32738 s3api copy-object --bucket awscli-mint-test-bucket-25727 --key datafile-1-kB-copy --copy-source awscli-mint-test-bucket-25727/datafile-1-kB\n",
  "status": "FAIL",
  "error": "Hash mismatch expected 084e1383b70fb0c51acc680fef370023, got ac57de7156d7fc25ac1a65f81fa3989b"
}
(5/15) Running healthcheck tests ... done in 0 seconds
(6/15) Running mc tests ... done in 53 seconds
(7/15) Running minio-dotnet tests ... done in 53 seconds
(8/15) Running minio-go tests ... FAILED in 2 minutes and 14 seconds
{
  "args": {
    "destination": {
      "Bucket": "minio-go-test-jhlaomg6muiqnko6",
      "Object": "dstObject",
      "Encryption": {},
      "UserMetadata": null,
      "ReplaceMetadata": false,
      "UserTags": null,
      "ReplaceTags": false,
      "LegalHold": "",
      "Mode": "",
      "RetainUntilDate": "0001-01-01T00:00:00Z",
      "Size": 0,
      "Progress": null
    },
    "source": {
      "Bucket": "minio-go-test-jhlaomg6muiqnko6",
      "Object": "srcObject",
      "VersionID": "",
      "MatchETag": "",
      "NoMatchETag": "",
      "MatchModifiedSince": "0001-01-01T00:00:00Z",
      "MatchUnmodifiedSince": "0001-01-01T00:00:00Z",
      "MatchRange": false,
      "Start": 0,
      "End": 0,
      "Encryption": null
    }
  },
  "duration": 5040,
  "error": "We encountered an internal error, please try again.: cause(s2: corrupt input)",
  "function": "CopyObject(destination, source)",
  "message": "GetObject failed",
  "name": "minio-go: testUnencryptedToSSES3CopyObject",
  "status": "FAIL"
}
(8/15) Running minio-java tests ... FAILED in 1 minutes and 36 seconds
{
  "name": "minio-java",
  "function": "copyObject()",
  "args": "[match etag]",
  "duration": 303,
  "status": "FAIL",
  "error": "error occurred\nErrorResponse(code = PreconditionFailed, message = At least one of the pre-conditions you specified did not hold, bucketName = minio-java-test-31tev6s, objectName = minio-java-test-3ijj7le-copy, resource = /minio-java-test-31tev6s/minio-java-test-3ijj7le-copy, requestId = 16641C4BAC364612, hostId = cac73a33-b013-4921-ab29-b8fba477cfd5)\nrequest={method=PUT, url=http://minio-c2.minio.io:32738/minio-java-test-31tev6s/minio-java-test-3ijj7le-copy, headers=x-amz-copy-source-if-match: 71cff0a060f852067e443ad1e24ae26c-1\nx-amz-copy-source: /minio-java-test-10sckkg/minio-java-test-3ijj7le\nHost: minio-c2.minio.io:32738\nAccept-Encoding: identity\nUser-Agent: MinIO (Linux; amd64) minio-java/8.0.3\nContent-MD5: 1B2M2Y8AsgTpgAmY7PhCfg==\nx-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855\nx-amz-date: 20210216T032117Z\nAuthorization: AWS4-HMAC-SHA256 Credential=*REDACTED*/20210216/us-east-1/s3/aws4_request, SignedHeaders=content-md5;host;x-amz-content-sha256;x-amz-copy-source;x-amz-copy-source-if-match;x-amz-date, Signature=*REDACTED*\n}\nresponse={code=412, headers=Accept-Ranges: bytes\nContent-Length: 418\nContent-Security-Policy: block-all-mixed-content\nContent-Type: application/xml\nETag: \"71cff0a060f852067e443ad1e24ae26c\"\nLast-Modified: Tue, 16 Feb 2021 03:21:17 GMT\nServer: MinIO\nVary: Origin\nX-Amz-Request-Id: 16641C4BAC364612\nX-Xss-Protection: 1; mode=block\nDate: Tue, 16 Feb 2021 03:21:17 GMT\n}\n >>> [io.minio.MinioClient.execute(MinioClient.java:775), io.minio.MinioClient.execute(MinioClient.java:563), io.minio.MinioClient.executePut(MinioClient.java:904), io.minio.MinioClient.copyObject(MinioClient.java:1232), FunctionalTest.testCopyObjectMatchETag(FunctionalTest.java:1850), FunctionalTest.copyObject(FunctionalTest.java:2016), FunctionalTest.runObjectTests(FunctionalTest.java:3757), FunctionalTest.runTests(FunctionalTest.java:3783), FunctionalTest.main(FunctionalTest.java:3927)]"
}
(8/15) Running minio-js tests ... done in 53 seconds
(9/15) Running minio-py tests ... done in 3 minutes and 12 seconds
(10/15) Running s3cmd tests ... FAILED in 6 seconds
{
  "name": "s3cmd",
  "duration": "3247",
  "function": "test_put_object_multipart",
  "status": "FAIL",
  "error": "WARNING: MD5 Sums don't match!\nWARNING: Retrying upload of /mint/data/datafile-65-MB\nWARNING: MD5 Sums don't match!\nWARNING: Retrying upload of /mint/data/datafile-65-MB\nWARNING: MD5 Sums don't match!\nWARNING: Retrying upload of /mint/data/datafile-65-MB\nWARNING: MD5 Sums don't match!\nWARNING: Retrying upload of /mint/data/datafile-65-MB\nWARNING: MD5 Sums don't match!\nWARNING: Retrying upload of /mint/data/datafile-65-MB\nWARNING: MD5 Sums don't match!\nWARNING: Too many failures. Giving up on '/mint/data/datafile-65-MB'\nERROR: \nUpload of '/mint/data/datafile-65-MB' part 1 failed. Use\n  /usr/local/bin/s3cmd abortmp s3://s3cmd-test-bucket-32204/s3cmd-test-object-16152 bf624d17-e50f-441c-a79d-3b422692b916\nto abort the upload, or\n  /usr/local/bin/s3cmd --upload-id bf624d17-e50f-441c-a79d-3b422692b916 put ...\nto continue the upload.\nERROR: Upload of '/mint/data/datafile-65-MB' failed too many times (Last reason: )"
}
(10/15) Running s3select tests ... done in 7 seconds
(11/15) Running security tests ... done in 0 seconds

Executed 11 out of 15 tests successfully.

Deleting image on docker hub
Deleting image locally

@poornas (Contributor) left a comment:


LGTM

@klauspost (Contributor) left a comment:


for i, pool := range z.serverPools {
could also do pools concurrently.
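
A hedged sketch of what that suggestion could look like (illustrative only; `doOp` and `forAllPoolsConcurrently` are hypothetical names, not the loop body from this PR): the sequential range over the pools could be turned into one goroutine per pool, for example with errgroup.

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

// doOp is a hypothetical per-pool operation standing in for whatever the
// body of the sequential `for i, pool := range z.serverPools` loop does.
func doOp(ctx context.Context, poolIdx int) error {
	fmt.Println("operating on pool", poolIdx)
	return nil
}

// forAllPoolsConcurrently runs doOp on every pool in parallel instead of
// sequentially and returns the first error encountered, if any.
func forAllPoolsConcurrently(ctx context.Context, numPools int) error {
	g, gctx := errgroup.WithContext(ctx)
	for i := 0; i < numPools; i++ {
		i := i // capture the loop variable (needed before Go 1.22)
		g.Go(func() error {
			return doOp(gctx, i)
		})
	}
	return g.Wait()
}

func main() {
	if err := forAllPoolsConcurrently(context.Background(), 3); err != nil {
		fmt.Println("error:", err)
	}
}
```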

@harshavardhana (Member, Author) commented on Feb 16, 2021

@klauspost I didn't do that on purpose, since there are some top layers that have a dependency on the implementation, such as bucket replication. I will get to it if no regressions are found in the current "read"-only part.

@klauspost (Contributor) left a comment:


LGTM. We can always look at improvements later.

@harshavardhana harshavardhana merged commit 7d4a2d2 into minio:master Feb 16, 2021
@harshavardhana harshavardhana deleted the pools branch February 16, 2021 10:43
poornas added a commit to poornas/minio that referenced this pull request Feb 16, 2021
* fix: metacache should only rename entries during cleanup (minio#11503)

To avoid large delays in metacache cleanup, use rename
instead of recursive delete calls; renames are cheaper.
Move the content to minioMetaTmpBucket and then clean up
this folder once every 24 hours instead.

If the new cache can replace an existing one, we should
let it replace it, since that one is currently being saved anyway;
this avoids a pile-up of thousands of metacache entries for
the same listing calls that do not need to be stored
on disk.

* turn off http2 for TLS setups for now (minio#11523)

Due to lots of issues with x/net/http2, as
well as the bundled h2_bundle.go in the Go
runtime, HTTP/2 should be avoided for now.

golang/go#23559
golang/go#42534
golang/go#43989
golang/go#33425
golang/go#29246

With such a collection of issues present, it
makes sense to remove HTTP/2 support for now.

* fix: save ModTime properly in disk cache (minio#11522)

fix minio#11414

* fix: osinfos incomplete in case of warnings (minio#11505)

The function used for getting host information
(host.SensorsTemperaturesWithContext) returns warnings in some cases.

Returning with an error in such cases means we miss out on the other useful
information already fetched (os info).

If the OS info has been successfully fetched, it should always be
included in the output irrespective of whether the other data (CPU
sensors, users) could be fetched or not.

* fix: avoid timed value for network calls (minio#11531)

Additionally, simplify timedValue to use an RWMutex
to avoid concurrent calls to DiskInfo() getting
serialized; this has an effect on all calls that
use GetDiskInfo() on the same disks,
such as getOnlineDisks and getOnlineDisksWithoutHealing.

* fix: support IAM policy handling for wildcard actions (minio#11530)

This PR fixes

- allow `s3:versionid` as a valid conditional for
  Get, Put, Tags, and Object Locking APIs
- allow additional headers missing for object APIs
- allow wildcard based action matching

* fix: in MultiDelete API return MalformedXML upon empty input (minio#11532)

To follow S3 spec

* Update yaml files to latest version RELEASE.2021-02-14T04-01-33Z

* fix: multiple pool reads parallelize when possible (minio#11537)

* Add support for remote tier management

With this change MinIO's ILM supports transitioning objects to a remote tier.
This change includes support for Azure Blob Storage, AWS S3 and Google Cloud
Storage as remote tier storage backends.

Co-authored-by: Poorna Krishnamoorthy <poorna@minio.io>
Co-authored-by: Krishna Srinivas <krishna@minio.io>
Co-authored-by: Krishnan Parthasarathi <kp@minio.io>

Co-authored-by: Harshavardhana <harsha@minio.io>
Co-authored-by: Poorna Krishnamoorthy <poornas@users.noreply.github.com>
Co-authored-by: Shireesh Anjal <355479+anjalshireesh@users.noreply.github.com>
Co-authored-by: Anis Elleuch <vadmeste@users.noreply.github.com>
Co-authored-by: Minio Trusted <trusted@minio.io>
Co-authored-by: Krishna Srinivas <krishna.srinivas@gmail.com>
Co-authored-by: Poorna Krishnamoorthy <poorna@minio.io>
Co-authored-by: Krishna Srinivas <krishna@minio.io>