Rebase to upstream master (#9)
* fix: metacache should only rename entries during cleanup (minio#11503)

To avoid large delays in metacache cleanup, use rename
instead of recursive delete calls; renames are cheaper.
Move the content to minioMetaTmpBucket and then clean up
that folder once every 24hrs instead.

If the new cache can replace an existing one, let it
replace it, since that cache is currently being saved anyway;
this avoids a pile-up of thousands of metacache entries for
the same listing calls that do not need to be stored
on disk.
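
As an illustration of the rename-then-reap pattern, here is a minimal
Go sketch (names like `trashDir`, `discard` and `reapTrash` are
illustrative stand-ins, not MinIO's actual API; in the real code
minioMetaTmpBucket plays the role of `trashDir`):

```go
package cleanup

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

// trashDir stands in for minioMetaTmpBucket in this sketch.
const trashDir = "/var/minio/.trash"

// discard replaces a recursive delete on the hot path with a single
// rename: one metadata operation, regardless of how large the tree is.
// (os.Rename requires source and destination on the same filesystem.)
func discard(dir string) error {
	if err := os.MkdirAll(trashDir, 0o755); err != nil {
		return err
	}
	dst := filepath.Join(trashDir, fmt.Sprintf("%d", time.Now().UnixNano()))
	return os.Rename(dir, dst)
}

// reapTrash performs the expensive recursive deletes in the background;
// calling it from a 24h ticker keeps cleanup off the listing path.
func reapTrash() error {
	entries, err := os.ReadDir(trashDir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		if err := os.RemoveAll(filepath.Join(trashDir, e.Name())); err != nil {
			return err
		}
	}
	return nil
}
```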

* turn off http2 for TLS setups for now (minio#11523)

Due to lots of open issues with x/net/http2, as
well as with the bundled h2_bundle.go in the Go
runtime, HTTP/2 should be avoided for now:

golang/go#23559
golang/go#42534
golang/go#43989
golang/go#33425
golang/go#29246

With such a collection of issues present, it
makes sense to disable HTTP/2 support for now.
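
For context, this is the standard way to switch off HTTP/2 in Go: per
the net/http documentation, a non-nil but empty TLSNextProto map disables
the automatic HTTP/2 upgrade on both the client transport and the server.
The sketch below is illustrative, not MinIO's exact transport code:

```go
package httputil

import (
	"crypto/tls"
	"net/http"
)

// newHTTP1Transport returns a client transport that never negotiates
// "h2": a non-nil empty TLSNextProto map opts out of automatic HTTP/2.
func newHTTP1Transport() *http.Transport {
	return &http.Transport{
		TLSNextProto: map[string]func(string, *tls.Conn) http.RoundTripper{},
	}
}

// newHTTP1Server does the same on the server side and additionally
// advertises only "http/1.1" via ALPN, so clients never ask for HTTP/2.
func newHTTP1Server(addr string, h http.Handler) *http.Server {
	return &http.Server{
		Addr:         addr,
		Handler:      h,
		TLSNextProto: map[string]func(*http.Server, *tls.Conn, http.Handler){},
		TLSConfig:    &tls.Config{NextProtos: []string{"http/1.1"}},
	}
}
```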

* fix: save ModTime properly in disk cache (minio#11522)

fix minio#11414

* fix: osinfos incomplete in case of warnings (minio#11505)

The function used for getting host information
(host.SensorsTemperaturesWithContext) returns warnings in some cases.

Returning an error in such cases means we miss out on the other useful
information already fetched (OS info).

If the OS info has been successfully fetched, it should always be
included in the output, irrespective of whether the other data (CPU
sensors, users) could be fetched or not.
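
A rough sketch of the pattern, assuming the gopsutil host package (the
OSInfo struct and getOSInfo helper here are illustrative, not MinIO's
madmin types): fetch the OS info first, then record sensor failures as
warnings instead of returning early:

```go
package sysinfo

import (
	"context"
	"fmt"

	"github.com/shirou/gopsutil/host"
)

// OSInfo is a hypothetical aggregate: OS info plus best-effort extras.
type OSInfo struct {
	Info     *host.InfoStat
	Sensors  []host.TemperatureStat
	Warnings []string
}

func getOSInfo(ctx context.Context) (OSInfo, error) {
	info, err := host.InfoWithContext(ctx)
	if err != nil {
		// Nothing useful was fetched yet; this is a real failure.
		return OSInfo{}, err
	}
	out := OSInfo{Info: info}

	// SensorsTemperaturesWithContext may return partial data together
	// with a warning-style error; keep the data, record the warning.
	sensors, err := host.SensorsTemperaturesWithContext(ctx)
	out.Sensors = sensors
	if err != nil {
		out.Warnings = append(out.Warnings, fmt.Sprintf("sensors: %v", err))
	}
	return out, nil
}
```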

* fix: avoid timed value for network calls (minio#11531)

Additionally, simplify timedValue to use an RWMutex
so that concurrent calls to DiskInfo() do not get
serialized; this affects all calls that
use GetDiskInfo() on the same disks,
such as getOnlineDisks and getOnlineDisksWithoutHealing.
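
The idea, as a minimal sketch (a generic timedValue, illustrative rather
than MinIO's exact type): readers of a fresh cached result take only the
shared lock and so run in parallel; only a refresh takes the exclusive
lock:

```go
package cache

import (
	"sync"
	"time"
)

// timedValue caches the result of Update for TTL; with an RWMutex,
// concurrent Get calls on a fresh value do not serialize each other.
type timedValue struct {
	mu     sync.RWMutex
	value  interface{}
	setAt  time.Time
	TTL    time.Duration
	Update func() (interface{}, error)
}

func (t *timedValue) Get() (interface{}, error) {
	// Fast path: shared lock only, readers proceed concurrently.
	t.mu.RLock()
	if t.value != nil && time.Since(t.setAt) < t.TTL {
		v := t.value
		t.mu.RUnlock()
		return v, nil
	}
	t.mu.RUnlock()

	// Slow path: refresh under the exclusive lock.
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.value != nil && time.Since(t.setAt) < t.TTL {
		return t.value, nil // another goroutine refreshed meanwhile
	}
	v, err := t.Update()
	if err != nil {
		return nil, err
	}
	t.value, t.setAt = v, time.Now()
	return v, nil
}
```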

* fix: support IAM policy handling for wildcard actions (minio#11530)

This PR fixes:

- allow `s3:versionid` as a valid condition key for
  Get, Put, Tags, and Object Locking APIs
- allow additional headers that were missing for object APIs
- allow wildcard-based action matching (see the sketch below)
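
A self-contained sketch of wildcard action matching (matchAction and
deepMatch are hypothetical helpers, not MinIO's policy code): a policy
action such as `s3:Get*` must match `s3:GetObject`, `s3:GetObjectTagging`,
and so on:

```go
package policy

// matchAction reports whether an S3 action matches a policy pattern,
// where '*' matches any (possibly empty) run of bytes and '?' exactly one.
func matchAction(pattern, action string) bool {
	if pattern == "*" {
		return true
	}
	return deepMatch(pattern, action)
}

func deepMatch(pattern, name string) bool {
	for len(pattern) > 0 {
		switch pattern[0] {
		case '?':
			if len(name) == 0 {
				return false
			}
			pattern, name = pattern[1:], name[1:]
		case '*':
			// '*' either matches the rest of name, or consumes one
			// byte of name and stays in place.
			return deepMatch(pattern[1:], name) ||
				(len(name) > 0 && deepMatch(pattern, name[1:]))
		default:
			if len(name) == 0 || name[0] != pattern[0] {
				return false
			}
			pattern, name = pattern[1:], name[1:]
		}
	}
	return len(name) == 0
}

// Example: matchAction("s3:Get*", "s3:GetObject") == true,
// matchAction("s3:Get*", "s3:PutObject") == false.
```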

* fix: in MultiDelete API return MalformedXML upon empty input (minio#11532)

To follow the S3 spec.

* Update yaml files to latest version RELEASE.2021-02-14T04-01-33Z

* fix: multiple pool reads parallelize when possible (minio#11537)

* Add support for remote tier management

With this change MinIO's ILM supports transitioning objects to a remote tier.
This change includes support for Azure Blob Storage, AWS S3 and Google Cloud
Storage as remote tier storage backends.

Co-authored-by: Poorna Krishnamoorthy <poorna@minio.io>
Co-authored-by: Krishna Srinivas <krishna@minio.io>
Co-authored-by: Krishnan Parthasarathi <kp@minio.io>

Co-authored-by: Harshavardhana <harsha@minio.io>
Co-authored-by: Poorna Krishnamoorthy <poornas@users.noreply.github.com>
Co-authored-by: Shireesh Anjal <355479+anjalshireesh@users.noreply.github.com>
Co-authored-by: Anis Elleuch <vadmeste@users.noreply.github.com>
Co-authored-by: Minio Trusted <trusted@minio.io>
Co-authored-by: Krishna Srinivas <krishna.srinivas@gmail.com>
Co-authored-by: Poorna Krishnamoorthy <poorna@minio.io>
Co-authored-by: Krishna Srinivas <krishna@minio.io>
9 people authored Feb 16, 2021
1 parent b9e6693 commit 0fe802d
Showing 31 changed files with 787 additions and 271 deletions.
4 changes: 2 additions & 2 deletions Dockerfile.release
@@ -5,8 +5,8 @@ ARG TARGETARCH
 LABEL name="MinIO" \
       vendor="MinIO Inc <dev@min.io>" \
       maintainer="MinIO Inc <dev@min.io>" \
-      version="RELEASE.2021-02-11T08-23-43Z" \
-      release="RELEASE.2021-02-11T08-23-43Z" \
+      version="RELEASE.2021-02-14T04-01-33Z" \
+      release="RELEASE.2021-02-14T04-01-33Z" \
       summary="MinIO is a High Performance Object Storage, API compatible with Amazon S3 cloud storage service." \
       description="MinIO object storage is fundamentally different. Designed for performance and the S3 API, it is 100% open-source. MinIO is ideal for large, private cloud environments with stringent security requirements and delivers mission-critical availability across a diverse range of workloads."

3 changes: 1 addition & 2 deletions cmd/admin-router.go
@@ -110,10 +110,9 @@ func registerAdminRouter(router *mux.Router, enableConfigOps, enableIAMOps bool)
 	// -- IAM APIs --
 
 	// Add policy IAM
-	adminRouter.Methods(http.MethodPut).Path(adminVersion+"/add-canned-policy").HandlerFunc(httpTraceHdrs(adminAPI.AddCannedPolicy)).Queries("name", "{name:.*}")
+	adminRouter.Methods(http.MethodPut).Path(adminVersion+"/add-canned-policy").HandlerFunc(httpTraceAll(adminAPI.AddCannedPolicy)).Queries("name", "{name:.*}")
 
 	// Add user IAM
-
 	adminRouter.Methods(http.MethodGet).Path(adminVersion + "/accountinfo").HandlerFunc(httpTraceAll(adminAPI.AccountInfoHandler))
 
 	adminRouter.Methods(http.MethodPut).Path(adminVersion+"/add-user").HandlerFunc(httpTraceHdrs(adminAPI.AddUser)).Queries("accessKey", "{accessKey:.*}")
6 changes: 6 additions & 0 deletions cmd/bucket-handlers.go
@@ -427,6 +427,12 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
 		deleteObjectsFn = api.CacheAPI().DeleteObjects
 	}
 
+	// Return Malformed XML as S3 spec if the list of objects is empty
+	if len(deleteObjects.Objects) == 0 {
+		writeErrorResponse(ctx, w, errorCodes.ToAPIErr(ErrMalformedXML), r.URL, guessIsBrowserReq(r))
+		return
+	}
+
 	var objectsToDelete = map[ObjectToDelete]int{}
 	getObjectInfoFn := objectAPI.GetObjectInfo
 	if api.CacheAPI() != nil {
14 changes: 0 additions & 14 deletions cmd/bucket-listobjects-handlers.go
@@ -26,7 +26,6 @@ import (
 	"github.com/minio/minio/cmd/logger"
 
 	"github.com/minio/minio/pkg/bucket/policy"
-	"github.com/minio/minio/pkg/handlers"
 	"github.com/minio/minio/pkg/sync/errgroup"
 )
 
@@ -295,10 +294,6 @@ func proxyRequestByNodeIndex(ctx context.Context, w http.ResponseWriter, r *http
 	return proxyRequest(ctx, w, r, ep)
 }
 
-func proxyRequestByStringHash(ctx context.Context, w http.ResponseWriter, r *http.Request, str string) (success bool) {
-	return proxyRequestByNodeIndex(ctx, w, r, crcHashMod(str, len(globalProxyEndpoints)))
-}
-
 // ListObjectsV1Handler - GET Bucket (List Objects) Version 1.
 // --------------------------
 // This implementation of the GET operation returns some or all (up to 10000)
@@ -337,15 +332,6 @@ func (api objectAPIHandlers) ListObjectsV1Handler(w http.ResponseWriter, r *http
 		return
 	}
 
-	// Forward the request using Source IP or bucket
-	forwardStr := handlers.GetSourceIPFromHeaders(r)
-	if forwardStr == "" {
-		forwardStr = bucket
-	}
-	if proxyRequestByStringHash(ctx, w, r, forwardStr) {
-		return
-	}
-
 	listObjects := objectAPI.ListObjects
 
 	// Inititate a list objects operation based on the input params.
11 changes: 9 additions & 2 deletions cmd/config-current.go
@@ -18,6 +18,7 @@ package cmd
 
 import (
 	"context"
+	"crypto/tls"
 	"fmt"
 	"strings"
 	"sync"
@@ -295,7 +296,10 @@ func validateConfig(s config.Config, setDriveCounts []int) error {
 		}
 	}
 	{
-		kmsCfg, err := crypto.LookupConfig(s, globalCertsCADir.Get(), NewGatewayHTTPTransport())
+		kmsCfg, err := crypto.LookupConfig(s, globalCertsCADir.Get(), newCustomHTTPTransportWithHTTP2(
+			&tls.Config{
+				RootCAs: globalRootCAs,
+			}, defaultDialTimeout)())
 		if err != nil {
 			return err
 		}
@@ -471,7 +475,10 @@ func lookupConfigs(s config.Config, setDriveCounts []int) {
 		}
 	}
 
-	kmsCfg, err := crypto.LookupConfig(s, globalCertsCADir.Get(), NewGatewayHTTPTransport())
+	kmsCfg, err := crypto.LookupConfig(s, globalCertsCADir.Get(), newCustomHTTPTransportWithHTTP2(
+		&tls.Config{
+			RootCAs: globalRootCAs,
+		}, defaultDialTimeout)())
 	if err != nil {
 		logger.LogIf(ctx, fmt.Errorf("Unable to setup KMS config: %w", err))
 	}
12 changes: 7 additions & 5 deletions cmd/disk-cache-backend.go
@@ -103,7 +103,6 @@ func (m *cacheMeta) ToObjectInfo(bucket, object string) (o ObjectInfo) {
 	}
 
 	// We set file info only if its valid.
-	o.ModTime = m.Stat.ModTime
 	o.Size = m.Stat.Size
 	o.ETag = extractETag(m.Meta)
 	o.ContentType = m.Meta["content-type"]
@@ -122,6 +121,12 @@ func (m *cacheMeta) ToObjectInfo(bucket, object string) (o ObjectInfo) {
 			o.Expires = t.UTC()
 		}
 	}
+	if mtime, ok := m.Meta["last-modified"]; ok {
+		if t, e = time.Parse(http.TimeFormat, mtime); e == nil {
+			o.ModTime = t.UTC()
+		}
+	}
+
 	// etag/md5Sum has already been extracted. We need to
 	// remove to avoid it from appearing as part of user-defined metadata
 	o.UserDefined = cleanMetadata(m.Meta)
@@ -506,9 +511,7 @@ func (c *diskCache) statCache(ctx context.Context, cacheObjPath string) (meta *c
 	}
 	// get metadata of part.1 if full file has been cached.
 	partial = true
-	fi, err := os.Stat(pathJoin(cacheObjPath, cacheDataFile))
-	if err == nil {
-		meta.Stat.ModTime = atime.Get(fi)
+	if _, err := os.Stat(pathJoin(cacheObjPath, cacheDataFile)); err == nil {
 		partial = false
 	}
 	return meta, partial, meta.Hits, nil
@@ -570,7 +573,6 @@ func (c *diskCache) saveMetadata(ctx context.Context, bucket, object string, met
 		}
 	}
 	m.Stat.Size = actualSize
-	m.Stat.ModTime = UTCNow()
 	if !incHitsOnly {
 		// reset meta
 		m.Meta = meta
1 change: 1 addition & 0 deletions cmd/disk-cache.go
@@ -200,6 +200,7 @@ func getMetadata(objInfo ObjectInfo) map[string]string {
 	if !objInfo.Expires.Equal(timeSentinel) {
 		metadata["expires"] = objInfo.Expires.Format(http.TimeFormat)
 	}
+	metadata["last-modified"] = objInfo.ModTime.Format(http.TimeFormat)
 	for k, v := range objInfo.UserDefined {
 		metadata[k] = v
 	}
3 changes: 2 additions & 1 deletion cmd/disk-cache_test.go
@@ -18,6 +18,7 @@ package cmd
 
 import (
 	"testing"
+	"time"
 )
 
 // Tests ToObjectInfo function.
@@ -27,7 +28,7 @@ func TestCacheMetadataObjInfo(t *testing.T) {
 	if objInfo.Size != 0 {
 		t.Fatal("Unexpected object info value for Size", objInfo.Size)
 	}
-	if !objInfo.ModTime.Equal(timeSentinel) {
+	if !objInfo.ModTime.Equal(time.Time{}) {
 		t.Fatal("Unexpected object info value for ModTime ", objInfo.ModTime)
 	}
 	if objInfo.IsDir {
48 changes: 42 additions & 6 deletions cmd/erasure-multipart.go
@@ -24,6 +24,7 @@ import (
 	"sort"
 	"strconv"
 	"strings"
+	"sync"
 	"time"
 
 	"github.com/minio/minio-go/v7/pkg/set"
@@ -91,12 +92,47 @@ func (er erasureObjects) removeObjectPart(bucket, object, uploadID, dataDir stri
 // Clean-up the old multipart uploads. Should be run in a Go routine.
 func (er erasureObjects) cleanupStaleUploads(ctx context.Context, expiry time.Duration) {
 	// run multiple cleanup's local to this server.
+	var wg sync.WaitGroup
 	for _, disk := range er.getLoadBalancedLocalDisks() {
 		if disk != nil {
-			er.cleanupStaleUploadsOnDisk(ctx, disk, expiry)
-			return
+			wg.Add(1)
+			go func(disk StorageAPI) {
+				defer wg.Done()
+				er.cleanupStaleUploadsOnDisk(ctx, disk, expiry)
+			}(disk)
 		}
 	}
+	wg.Wait()
 }
+
+func (er erasureObjects) renameAll(ctx context.Context, bucket, prefix string) {
+	var wg sync.WaitGroup
+	for _, disk := range er.getDisks() {
+		if disk == nil {
+			continue
+		}
+		wg.Add(1)
+		go func(disk StorageAPI) {
+			defer wg.Done()
+			disk.RenameFile(ctx, bucket, prefix, minioMetaTmpBucket, mustGetUUID())
+		}(disk)
+	}
+	wg.Wait()
+}
+
+func (er erasureObjects) deleteAll(ctx context.Context, bucket, prefix string) {
+	var wg sync.WaitGroup
+	for _, disk := range er.getDisks() {
+		if disk == nil {
+			continue
+		}
+		wg.Add(1)
+		go func(disk StorageAPI) {
+			defer wg.Done()
+			disk.Delete(ctx, bucket, prefix, true)
+		}(disk)
+	}
+	wg.Wait()
+}
 
 // Remove the old multipart uploads on the given disk.
@@ -118,7 +154,7 @@ func (er erasureObjects) cleanupStaleUploadsOnDisk(ctx context.Context, disk Sto
 			continue
 		}
 		if now.Sub(fi.ModTime) > expiry {
-			er.deleteObject(ctx, minioMetaMultipartBucket, uploadIDPath, fi.Erasure.DataBlocks+1)
+			er.renameAll(ctx, minioMetaMultipartBucket, uploadIDPath)
 		}
 	}
 }
@@ -127,12 +163,12 @@ func (er erasureObjects) cleanupStaleUploadsOnDisk(ctx context.Context, disk Sto
 		return
 	}
 	for _, tmpDir := range tmpDirs {
-		fi, err := disk.ReadVersion(ctx, minioMetaTmpBucket, tmpDir, "", false)
+		vi, err := disk.StatVol(ctx, pathJoin(minioMetaTmpBucket, tmpDir))
 		if err != nil {
 			continue
 		}
-		if now.Sub(fi.ModTime) > expiry {
-			er.deleteObject(ctx, minioMetaTmpBucket, tmpDir, fi.Erasure.DataBlocks+1)
+		if now.Sub(vi.Created) > expiry {
+			er.deleteAll(ctx, minioMetaTmpBucket, tmpDir)
 		}
 	}
 }
12 changes: 7 additions & 5 deletions cmd/erasure-object.go
@@ -372,12 +372,14 @@ func (er erasureObjects) getObject(ctx context.Context, bucket, object string, s
 
 // GetObjectInfo - reads object metadata and replies back ObjectInfo.
 func (er erasureObjects) GetObjectInfo(ctx context.Context, bucket, object string, opts ObjectOptions) (info ObjectInfo, err error) {
-	// Lock the object before reading.
-	lk := er.NewNSLock(bucket, object)
-	if err := lk.GetRLock(ctx, globalOperationTimeout); err != nil {
-		return ObjectInfo{}, err
+	if !opts.NoLock {
+		// Lock the object before reading.
+		lk := er.NewNSLock(bucket, object)
+		if err := lk.GetRLock(ctx, globalOperationTimeout); err != nil {
+			return ObjectInfo{}, err
+		}
+		defer lk.RUnlock()
 	}
-	defer lk.RUnlock()
 
 	return er.getObjectInfo(ctx, bucket, object, opts)
 }