[AzDatalake] Directory Client Implementation #21283

Merged · 64 commits · Jul 31, 2023

Changes from 62 commits
e8167a2
Enable gocritic during linting (#20715)
jhendrixMSFT Apr 28, 2023
86627ae
Cosmos DB: Enable merge support (#20716)
ealsur Apr 28, 2023
8ac8c6d
[azservicebus, azeventhubs] Stress test and logging improvement (#20710)
richardpark-msft May 1, 2023
9111616
update proxy version (#20712)
azure-sdk May 1, 2023
d6bf190
Return an error when you try to send a message that's too large. (#20…
richardpark-msft May 1, 2023
e2693bd
Changes in test that is failing in pipeline (#20693)
siminsavani-msft May 2, 2023
03f0ac3
[azservicebus, azeventhubs] Treat 'entity full' as a fatal error (#20…
richardpark-msft May 2, 2023
838842d
[azservicebus/azeventhubs] Redirect stderr and stdout to tee (#20726)
richardpark-msft May 3, 2023
20b4dd8
Update changelog with latest features (#20730)
jhendrixMSFT May 3, 2023
745d967
pass along the artifact name so we can override it later (#20732)
azure-sdk May 3, 2023
6dfd0cb
[azeventhubs] Fixing checkpoint store race condition (#20727)
richardpark-msft May 3, 2023
ed7f3c7
Fix azidentity troubleshooting guide link (#20736)
chlowell May 3, 2023
b2cddab
[Release] sdk/resourcemanager/paloaltonetworksngfw/armpanngfw/0.1.0 (…
Alancere May 4, 2023
2a8d96d
add sdk/resourcemanager/postgresql/armpostgresql live test (#20685)
Alancere May 4, 2023
0d22aed
add sdk/resourcemanager/eventhub/armeventhub live test (#20686)
Alancere May 4, 2023
5fa7df4
add sdk/resourcemanager/compute/armcompute live test (#20048)
Alancere May 4, 2023
c005ed6
sdk/resourcemanager/network/armnetwork live test (#20331)
Alancere May 4, 2023
36f766d
add sdk/resourcemanager/cosmos/armcosmos live test (#20705)
Alancere May 4, 2023
9c9d62a
Increment package version after release of azcore (#20740)
azure-sdk May 4, 2023
8bc3450
[azeventhubs] Improperly resetting etag in the checkpoint store (#20737)
richardpark-msft May 4, 2023
e1a6152
Eng workflows sync and branch cleanup additions (#20743)
azure-sdk May 4, 2023
04b463d
[azeventhubs] Latest start position can also be inclusive (ie, get th…
richardpark-msft May 4, 2023
8849196
Update GitHubEventProcessor version and remove pull_request_review pr…
azure-sdk May 5, 2023
27f5ee0
Rename DisableAuthorityValidationAndInstanceDiscovery (#20746)
chlowell May 5, 2023
2eec707
fix (#20707)
Alancere May 6, 2023
22db2d4
AzFile (#20739)
souravgupta-msft May 8, 2023
0cbfd88
azfile: Fixing connection string parsing logic (#20798)
souravgupta-msft May 8, 2023
d54fb08
[azadmin] fix flaky test (#20758)
gracewilcox May 8, 2023
ad8ebd9
Prepare azidentity v1.3.0 for release (#20756)
chlowell May 8, 2023
e2a6f70
Fix broken podman link (#20801)
azure-sdk May 8, 2023
a59d912
[azquery] update doc comments (#20755)
gracewilcox May 8, 2023
bd3b467
Fixed contribution section (#20752)
bobtabor-msft May 8, 2023
132a01a
[azeventhubs,azservicebus] Some API cleanup, renames (#20754)
richardpark-msft May 8, 2023
8db51ca
Add supporting features to enable distributed tracing (#20301) (#20708)
jhendrixMSFT May 9, 2023
4a66b4f
Restore ARM CAE support for azcore beta (#20657)
chlowell May 9, 2023
7d4a3cb
Upgrade to stable azcore (#20808)
chlowell May 9, 2023
068c3be
Increment package version after release of data/azcosmos (#20807)
azure-sdk May 9, 2023
8e0f66e
Updating changelog (#20810)
souravgupta-msft May 9, 2023
ce926c4
Add fake package to azcore (#20711)
jhendrixMSFT May 9, 2023
1a145c5
Updating CHANGELOG.md (#20809)
siminsavani-msft May 9, 2023
90dfc5c
changelog (#20811)
tasherif-msft May 9, 2023
c7eda59
Increment package version after release of storage/azfile (#20813)
azure-sdk May 9, 2023
7fac0b5
Update changelog (azblob) (#20815)
siminsavani-msft May 9, 2023
498a2ef
[azquery] migration guide (#20742)
gracewilcox May 9, 2023
ccb967e
Increment package version after release of monitor/azquery (#20820)
azure-sdk May 9, 2023
f4e6a22
[keyvault] prep for release (#20819)
gracewilcox May 10, 2023
8fd8eda
Merge branch 'main' into feature/azdatalake
tasherif-msft May 11, 2023
c94fa00
Merge remote-tracking branch 'upstream/feature/azdatalake' into featu…
tasherif-msft May 11, 2023
fc0b2b5
Merge remote-tracking branch 'upstream/feature/azdatalake' into featu…
tasherif-msft Jun 12, 2023
6fb1694
Merge remote-tracking branch 'upstream/feature/azdatalake' into featu…
tasherif-msft Jun 19, 2023
4f7fe43
Merge remote-tracking branch 'upstream/feature/azdatalake' into featu…
tasherif-msft Jun 26, 2023
3dac9d0
Merge remote-tracking branch 'upstream/feature/azdatalake' into featu…
tasherif-msft Jul 4, 2023
a0a861b
Merge remote-tracking branch 'upstream/feature/azdatalake' into featu…
tasherif-msft Jul 7, 2023
124e27e
Merge remote-tracking branch 'upstream/feature/azdatalake' into featu…
tasherif-msft Jul 19, 2023
0f5a52c
Merge remote-tracking branch 'upstream/feature/azdatalake' into featu…
tasherif-msft Jul 24, 2023
81dabb1
Merge remote-tracking branch 'upstream/feature/azdatalake' into featu…
tasherif-msft Jul 27, 2023
cd3b230
added dir methods
tasherif-msft Jul 27, 2023
819dbce
small fixes
tasherif-msft Jul 27, 2023
5d7c936
added rescursive set acl methods
tasherif-msft Jul 27, 2023
a258f6a
recursive support
tasherif-msft Jul 27, 2023
88094af
added sas and tests
tasherif-msft Jul 28, 2023
832f7d1
comment
tasherif-msft Jul 28, 2023
ac9992e
comment
tasherif-msft Jul 28, 2023
5a7a3cd
fix const
tasherif-msft Jul 31, 2023
2 changes: 1 addition & 1 deletion sdk/storage/azdatalake/assets.json
@@ -2,5 +2,5 @@
"AssetsRepo": "Azure/azure-sdk-assets",
"AssetsRepoPrefixPath": "go",
"TagPrefix": "go/storage/azdatalake",
"Tag": "go/storage/azdatalake_db1de4a48b"
"Tag": "go/storage/azdatalake_9dd1cc3e0e"
}
199 changes: 167 additions & 32 deletions sdk/storage/azdatalake/directory/client.go
@@ -12,10 +12,17 @@ import (
"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blockblob"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/datalakeerror"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/base"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/exported"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/generated"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/path"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/shared"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/sas"
"net/http"
"net/url"
"strings"
"time"
)

// ClientOptions contains the optional parameters when creating a Client.
@@ -38,7 +45,7 @@ func NewClient(directoryURL string, cred azcore.TokenCredential, options *Client
}
base.SetPipelineOptions((*base.ClientOptions)(conOptions), &plOpts)

azClient, err := azcore.NewClient(shared.FileClient, exported.ModuleVersion, plOpts, &conOptions.ClientOptions)
azClient, err := azcore.NewClient(shared.DirectoryClient, exported.ModuleVersion, plOpts, &conOptions.ClientOptions)
if err != nil {
return nil, err
}
@@ -154,6 +161,10 @@ func (d *Client) blobClient() *blockblob.Client {
return blobClient
}

func (d *Client) getClientOptions() *base.ClientOptions {
return base.GetCompositeClientOptions((*base.CompositeClient[generated.PathClient, generated.PathClient, blockblob.Client])(d))
}

func (d *Client) sharedKey() *exported.SharedKeyCredential {
return base.SharedKeyComposite((*base.CompositeClient[generated.PathClient, generated.PathClient, blockblob.Client])(d))
}
@@ -174,65 +185,189 @@ func (d *Client) BlobURL() string {

//TODO: create method to get file client - this will require block blob to have a method to get another block blob

// Create creates a new directory (dfs1).
// Create creates a new file (dfs1).
func (d *Client) Create(ctx context.Context, options *CreateOptions) (CreateResponse, error) {
return CreateResponse{}, nil
lac, mac, httpHeaders, createOpts, cpkOpts := options.format()
resp, err := d.generatedDirClientWithDFS().Create(ctx, createOpts, httpHeaders, lac, mac, nil, cpkOpts)
err = exported.ConvertToDFSError(err)
return resp, err
}

// Delete removes the directory (dfs1).
// Delete deletes directory and any path under it (dfs1).
func (d *Client) Delete(ctx context.Context, options *DeleteOptions) (DeleteResponse, error) {
//TODO: pass recursive = true
return DeleteResponse{}, nil
lac, mac, deleteOpts := path.FormatDeleteOptions(options, true)
resp, err := d.generatedDirClientWithDFS().Delete(ctx, deleteOpts, lac, mac)
err = exported.ConvertToDFSError(err)
return resp, err
}

// GetProperties returns the properties of the directory (blob3). #TODO: we may just move this method to path client
// GetProperties gets the properties of a file (blob3)
func (d *Client) GetProperties(ctx context.Context, options *GetPropertiesOptions) (GetPropertiesResponse, error) {
// TODO: format blob response to path response
return GetPropertiesResponse{}, nil
opts := path.FormatGetPropertiesOptions(options)
var respFromCtx *http.Response
ctxWithResp := runtime.WithCaptureResponse(ctx, &respFromCtx)
resp, err := d.blobClient().GetProperties(ctxWithResp, opts)
newResp := path.FormatGetPropertiesResponse(&resp, respFromCtx)
err = exported.ConvertToDFSError(err)
return newResp, err
}

func (d *Client) renamePathInURL(newName string) (string, string, string) {
endpoint := d.DFSURL()
separator := "/"
// Find the index of the last occurrence of the separator
lastIndex := strings.LastIndex(endpoint, separator)
// Split the string based on the last occurrence of the separator
firstPart := endpoint[:lastIndex] // From the beginning of the string to the last occurrence of the separator
newPathURL, newBlobURL := shared.GetURLs(runtime.JoinPaths(firstPart, newName))
parsedNewURL, _ := url.Parse(d.DFSURL())
return parsedNewURL.Path, newPathURL, newBlobURL
}
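The helper above computes the rename target by cutting the endpoint at its last `/` and joining the new name in its place. A standalone sketch of that path-splitting idea (not the SDK helper itself; the example endpoint is illustrative):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// renameTarget mirrors the splitting in renamePathInURL: drop the last
// segment of the endpoint and join the new name where it was.
func renameTarget(endpoint, newName string) (oldPath, newURL string) {
	lastIndex := strings.LastIndex(endpoint, "/")
	firstPart := endpoint[:lastIndex] // everything before the final segment
	newURL = firstPart + "/" + newName
	parsed, _ := url.Parse(endpoint)
	return parsed.Path, newURL
}

func main() {
	oldPath, newURL := renameTarget("https://account.dfs.core.windows.net/fs/dir1", "dir2")
	fmt.Println(oldPath) // /fs/dir1
	fmt.Println(newURL)  // https://account.dfs.core.windows.net/fs/dir2
}
```

Note that, like the original, this assumes the directory name contains no trailing slash, since only the text after the final `/` is replaced.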

// Rename renames the directory (dfs1).
// Rename renames a file (dfs1)
func (d *Client) Rename(ctx context.Context, newName string, options *RenameOptions) (RenameResponse, error) {
return RenameResponse{}, nil
newPathWithoutURL, newBlobURL, newPathURL := d.renamePathInURL(newName)
lac, mac, smac, createOpts := path.FormatRenameOptions(options, newPathWithoutURL)
var newBlobClient *blockblob.Client
var err error
if d.identityCredential() != nil {
newBlobClient, err = blockblob.NewClient(newBlobURL, *d.identityCredential(), nil)
} else if d.sharedKey() != nil {
blobSharedKey, _ := d.sharedKey().ConvertToBlobSharedKey()
newBlobClient, err = blockblob.NewClientWithSharedKeyCredential(newBlobURL, blobSharedKey, nil)
} else {
newBlobClient, err = blockblob.NewClientWithNoCredential(newBlobURL, nil)
}
if err != nil {
return RenameResponse{}, err
}
newDirClient := (*Client)(base.NewPathClient(newPathURL, newBlobURL, newBlobClient, d.generatedDirClientWithDFS().InternalClient().WithClientName(shared.DirectoryClient), d.sharedKey(), d.identityCredential(), d.getClientOptions()))
resp, err := newDirClient.generatedDirClientWithDFS().Create(ctx, createOpts, nil, lac, mac, smac, nil)
return RenameResponse{
Response: resp,
NewDirectoryClient: newDirClient,
}, exported.ConvertToDFSError(err)
}

// SetAccessControl sets the owner, owning group, and permissions for a file or directory (dfs1).
func (d *Client) SetAccessControl(ctx context.Context, options *SetAccessControlOptions) (SetAccessControlResponse, error) {
return SetAccessControlResponse{}, nil
opts, lac, mac, err := path.FormatSetAccessControlOptions(options)
if err != nil {
return SetAccessControlResponse{}, err
}
resp, err := d.generatedDirClientWithDFS().SetAccessControl(ctx, opts, lac, mac)
err = exported.ConvertToDFSError(err)
return resp, err
}

// SetAccessControlRecursive sets the owner, owning group, and permissions for a file or directory (dfs1).
func (d *Client) SetAccessControlRecursive(ctx context.Context, options *SetAccessControlRecursiveOptions) (SetAccessControlRecursiveResponse, error) {
// TODO explicitly pass SetAccessControlRecursiveMode
return SetAccessControlRecursiveResponse{}, nil
func (d *Client) setAccessControlHelper(mode generated.PathSetAccessControlRecursiveMode, listOptions *generated.PathClientSetAccessControlRecursiveOptions) *runtime.Pager[SetAccessControlRecursiveResponse] {
return runtime.NewPager(runtime.PagingHandler[SetAccessControlRecursiveResponse]{
More: func(page SetAccessControlRecursiveResponse) bool {
return page.Continuation != nil && len(*page.Continuation) > 0
},
Fetcher: func(ctx context.Context, page *SetAccessControlRecursiveResponse) (SetAccessControlRecursiveResponse, error) {
var req *policy.Request
var err error
if page == nil {
req, err = d.generatedDirClientWithDFS().SetAccessControlRecursiveCreateRequest(ctx, mode, listOptions)
err = exported.ConvertToDFSError(err)
} else {
listOptions.Continuation = page.Continuation
req, err = d.generatedDirClientWithDFS().SetAccessControlRecursiveCreateRequest(ctx, mode, listOptions)
err = exported.ConvertToDFSError(err)
}
if err != nil {
return SetAccessControlRecursiveResponse{}, err
}
resp, err := d.generatedDirClientWithDFS().InternalClient().Pipeline().Do(req)
err = exported.ConvertToDFSError(err)
if err != nil {
return SetAccessControlRecursiveResponse{}, err
}
if !runtime.HasStatusCode(resp, http.StatusOK) {
return SetAccessControlRecursiveResponse{}, runtime.NewResponseError(resp)
}
newResp, err := d.generatedDirClientWithDFS().SetAccessControlRecursiveHandleResponse(resp)
return newResp, exported.ConvertToDFSError(err)
},
})

}
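The helper above drives all three recursive-ACL pagers through one `runtime.NewPager` contract: `More` reports whether the last page carried a non-empty continuation token, and `Fetcher` re-issues the request with that token. A minimal standalone sketch of the same contract, with a fake in-memory service standing in for the generated client (all names here are illustrative, not SDK API):

```go
package main

import "fmt"

// page mimics SetAccessControlRecursiveResponse: a batch of results
// plus an optional continuation token for the next batch.
type page struct {
	Items        []string
	Continuation *string
}

// pager reproduces the More/Fetcher contract used by runtime.NewPager.
type pager struct {
	more  func(page) bool
	fetch func(*page) (page, error)
	cur   *page
}

// NextPage returns the next page, or ok=false when paging is done.
func (p *pager) NextPage() (page, bool) {
	if p.cur != nil && !p.more(*p.cur) {
		return page{}, false // last page had no continuation token
	}
	next, err := p.fetch(p.cur)
	if err != nil {
		return page{}, false
	}
	p.cur = &next
	return next, true
}

func strPtr(s string) *string { return &s }

func main() {
	// Fake service: three batches, the last with no continuation token.
	batches := map[string]page{
		"":   {Items: []string{"a", "b"}, Continuation: strPtr("t1")},
		"t1": {Items: []string{"c"}, Continuation: strPtr("t2")},
		"t2": {Items: []string{"d"}, Continuation: nil},
	}
	p := &pager{
		more: func(pg page) bool { return pg.Continuation != nil && len(*pg.Continuation) > 0 },
		fetch: func(prev *page) (page, error) {
			token := ""
			if prev != nil {
				token = *prev.Continuation // safe: more() guarded non-nil
			}
			return batches[token], nil
		},
	}
	var all []string
	for pg, ok := p.NextPage(); ok; pg, ok = p.NextPage() {
		all = append(all, pg.Items...)
	}
	fmt.Println(all) // [a b c d]
}
```

The real helper follows the same shape: a nil page means "first request", a non-nil page feeds its `Continuation` back into `listOptions` before the request is re-created.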

// UpdateAccessControlRecursive updates the owner, owning group, and permissions for a file or directory (dfs1).
func (d *Client) UpdateAccessControlRecursive(ctx context.Context, options *UpdateAccessControlRecursiveOptions) (UpdateAccessControlRecursiveResponse, error) {
// TODO explicitly pass SetAccessControlRecursiveMode
return SetAccessControlRecursiveResponse{}, nil
// NewSetAccessControlRecursivePager sets the owner, owning group, and permissions for a file or directory (dfs1).
func (d *Client) NewSetAccessControlRecursivePager(ACL string, options *SetAccessControlRecursiveOptions) *runtime.Pager[SetAccessControlRecursiveResponse] {
mode, listOptions := options.format(ACL, "set")
return d.setAccessControlHelper(mode, listOptions)
}

// GetAccessControl gets the owner, owning group, and permissions for a file or directory (dfs1).
func (d *Client) GetAccessControl(ctx context.Context, options *GetAccessControlOptions) (GetAccessControlResponse, error) {
return GetAccessControlResponse{}, nil
// NewUpdateAccessControlRecursivePager updates the owner, owning group, and permissions for a file or directory (dfs1).
func (d *Client) NewUpdateAccessControlRecursivePager(ACL string, options *UpdateAccessControlRecursiveOptions) *runtime.Pager[UpdateAccessControlRecursiveResponse] {
mode, listOptions := options.format(ACL, "modify")
return d.setAccessControlHelper(mode, listOptions)
}

// NewRemoveAccessControlRecursivePager removes the owner, owning group, and permissions for a file or directory (dfs1).
func (d *Client) NewRemoveAccessControlRecursivePager(ACL string, options *RemoveAccessControlRecursiveOptions) *runtime.Pager[RemoveAccessControlRecursiveResponse] {
mode, listOptions := options.format(ACL, "remove")
return d.setAccessControlHelper(mode, listOptions)
}

// RemoveAccessControlRecursive removes the owner, owning group, and permissions for a file or directory (dfs1).
func (d *Client) RemoveAccessControlRecursive(ctx context.Context, options *RemoveAccessControlRecursiveOptions) (RemoveAccessControlRecursiveResponse, error) {
// TODO explicitly pass SetAccessControlRecursiveMode
return SetAccessControlRecursiveResponse{}, nil
// GetAccessControl gets the owner, owning group, and permissions for a file or directory (dfs1).
func (d *Client) GetAccessControl(ctx context.Context, options *GetAccessControlOptions) (GetAccessControlResponse, error) {
opts, lac, mac := path.FormatGetAccessControlOptions(options)
resp, err := d.generatedDirClientWithDFS().GetProperties(ctx, opts, lac, mac)
err = exported.ConvertToDFSError(err)
return resp, err
}

// SetMetadata sets the metadata for a file or directory (blob3).
func (d *Client) SetMetadata(ctx context.Context, options *SetMetadataOptions) (SetMetadataResponse, error) {
// TODO: call directly into blob
return SetMetadataResponse{}, nil
opts, metadata := path.FormatSetMetadataOptions(options)
resp, err := d.blobClient().SetMetadata(ctx, metadata, opts)
err = exported.ConvertToDFSError(err)
return resp, err
}

// SetHTTPHeaders sets the HTTP headers for a file or directory (blob3).
func (d *Client) SetHTTPHeaders(ctx context.Context, httpHeaders HTTPHeaders, options *SetHTTPHeadersOptions) (SetHTTPHeadersResponse, error) {
// TODO: call formatBlobHTTPHeaders() since we want to add the blob prefix to our options before calling into blob
// TODO: call into blob
return SetHTTPHeadersResponse{}, nil
opts, blobHTTPHeaders := path.FormatSetHTTPHeadersOptions(options, httpHeaders)
resp, err := d.blobClient().SetHTTPHeaders(ctx, blobHTTPHeaders, opts)
newResp := SetHTTPHeadersResponse{}
path.FormatSetHTTPHeadersResponse(&newResp, &resp)
err = exported.ConvertToDFSError(err)
return newResp, err
}

// GetSASURL is a convenience method for generating a SAS token for the currently pointed at blob.
// It can only be used if the credential supplied during creation was a SharedKeyCredential.
func (f *Client) GetSASURL(permissions sas.DirectoryPermissions, expiry time.Time, o *GetSASURLOptions) (string, error) {
if f.sharedKey() == nil {
return "", datalakeerror.MissingSharedKeyCredential
}

urlParts, err := sas.ParseURL(f.BlobURL())
err = exported.ConvertToDFSError(err)
if err != nil {
return "", err
}

st := path.FormatGetSASURLOptions(o)

qps, err := sas.DatalakeSignatureValues{
DirectoryPath: urlParts.PathName,
FilesystemName: urlParts.FilesystemName,
Version: sas.Version,
Permissions: permissions.String(),
StartTime: st,
ExpiryTime: expiry.UTC(),
}.SignWithSharedKey(f.sharedKey())

err = exported.ConvertToDFSError(err)
if err != nil {
return "", err
}

endpoint := f.BlobURL() + "?" + qps.Encode()

return endpoint, nil
}