This plugin allows you to send SFTPGo filesystem and provider events to publish/subscribe systems. It is not meant to react to `pre-*` events: it simply forwards the configured events to an external pub/sub system.
The supported services can be configured within the `plugins` section of the SFTPGo configuration file: set the service publish URL as the first plugin argument. You can also set an optional instance ID by passing a second argument. The instance ID is useful if you receive notifications from multiple SFTPGo instances and want to identify where the events come from.
This is an example configuration:

```json
...
"plugins": [
  {
    "type": "notifier",
    "notifier_options": {
      "fs_events": [
        "download",
        "upload"
      ],
      "provider_events": [
        "add",
        "delete"
      ],
      "provider_objects": [
        "user",
        "admin",
        "api_key"
      ],
      "retry_max_time": 60,
      "retry_queue_max_size": 1000
    },
    "cmd": "<path to sftpgo-plugin-pubsub>",
    "args": ["rabbit://sftpgo"],
    "sha256sum": "",
    "auto_mtls": true
  }
]
...
```
With the above example the plugin is configured to connect to RabbitMQ and publish messages to the `sftpgo` fanout exchange. The RabbitMQ server is discovered from the `RABBIT_SERVER_URL` environment variable (which is something like `amqp://guest:guest@localhost:5672/`).
The plugin will not start if it fails to connect to the configured service, and this will prevent SFTPGo from starting. The plugin will panic if an error occurs while publishing an event, for example because the service connection is lost. SFTPGo will automatically restart it, and you can configure SFTPGo to retry failed events until they are older than a configurable time (60 seconds in the above example). This way no event is lost.
The filesystem events will contain a JSON serialized struct in the message body with the following fields:

- `timestamp`, string formatted as RFC3339 with nanoseconds precision
- `action`, string, an SFTPGo supported action
- `username`
- `fs_path`, string, filesystem path
- `fs_target_path`, string, included for `rename` action and `sftpgo-copy` SSH command
- `virtual_path`, string, path seen by SFTPGo users
- `virtual_target_path`, string, target path seen by SFTPGo users
- `ssh_cmd`, string, included for `ssh_cmd` action
- `file_size`, integer, included for `pre-upload`, `upload`, `download`, `delete` actions if the file size is greater than `0`
- `elapsed`, integer. Elapsed time as milliseconds
- `status`, integer. 1 means no error, 2 means a generic error occurred, 3 means quota exceeded error
- `protocol`, string. Possible values are `SSH`, `SFTP`, `SCP`, `FTP`, `DAV`, `HTTP`, `HTTPShare`, `DataRetention`, `OIDC`, `EventAction`
- `ip`, string. The action was executed from this IP address
- `session_id`, string. Unique protocol session identifier. For stateless protocols such as HTTP the session id will change for each request
- `fs_provider`, integer. Filesystem provider: `0` for local filesystem, `1` for S3 backend, `2` for Google Cloud Storage (GCS) backend, `3` for Azure Blob Storage backend, `4` for local encrypted backend, `5` for SFTP backend
- `bucket`, string. Non-empty for S3, GCS and Azure backends
- `endpoint`, string. Non-empty for S3, SFTP and Azure backend if configured
- `open_flags`, integer. File open flags, can be non-zero for `pre-upload` action. If `file_size` is greater than zero and `open_flags&512 == 0` the target file will not be truncated
- `role`, string. Included if the user who executed the action has a role
- `metadata`, struct
- `instance_id`, string. Included if you pass an instance id as the second CLI parameter

The `action` is also added as metadata.
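For reference, here is a minimal sketch of a Go consumer for these events, using the same Go CDK library. It assumes a hypothetical RabbitMQ queue named `sftpgo-events` bound to the `sftpgo` fanout exchange from the example above, and decodes only a subset of the fields:

```go
package main

import (
	"context"
	"encoding/json"
	"log"

	"gocloud.dev/pubsub"
	_ "gocloud.dev/pubsub/rabbitpubsub" // registers the rabbit:// scheme
)

// FsEvent mirrors a subset of the filesystem event fields listed above.
type FsEvent struct {
	Timestamp   string `json:"timestamp"`
	Action      string `json:"action"`
	Username    string `json:"username"`
	FsPath      string `json:"fs_path"`
	VirtualPath string `json:"virtual_path"`
	FileSize    int64  `json:"file_size"`
	Status      int    `json:"status"`
	Protocol    string `json:"protocol"`
	IP          string `json:"ip"`
	InstanceID  string `json:"instance_id"`
}

func main() {
	ctx := context.Background()
	// The RabbitMQ server is taken from RABBIT_SERVER_URL, as for the plugin.
	// "sftpgo-events" is a hypothetical queue bound to the "sftpgo" exchange.
	sub, err := pubsub.OpenSubscription(ctx, "rabbit://sftpgo-events")
	if err != nil {
		log.Fatal(err)
	}
	defer sub.Shutdown(ctx)

	for {
		msg, err := sub.Receive(ctx)
		if err != nil {
			log.Fatal(err)
		}
		var ev FsEvent
		if err := json.Unmarshal(msg.Body, &ev); err != nil {
			log.Printf("unable to decode event: %v", err)
		} else {
			log.Printf("%s on %q by %q, status %d", ev.Action, ev.VirtualPath, ev.Username, ev.Status)
		}
		msg.Ack()
	}
}
```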
The provider events will contain a JSON serialized struct in the message body with the following fields:

- `timestamp`, string formatted as RFC3339 with nanoseconds precision
- `action`, string, an SFTPGo supported action
- `username`, string, the username that executed the action. There are two special usernames: `__self__` identifies a user/admin that updates itself and `__system__` identifies an action that does not have an explicit executor associated with it, for example users/admins can be added/updated by loading them from initial data
- `ip`, string. The action was executed from this IP address
- `object_type`, string. Affected object type, for example `user`, `admin`, `api_key`
- `object_name`, string. Unique identifier for the affected object, for example username or key id
- `object_data`, base64 of the JSON serialized object with sensitive fields removed
- `role`, string. Included if the admin/user who executed the action has a role
- `instance_id`, string. Included if you pass an instance id as the second CLI parameter

The `action` and the `object_type` are also added as metadata.
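Since `object_data` is base64-encoded JSON, a consumer has to decode it in two steps. A minimal sketch, with illustrative names and a fabricated event just to exercise the decoding:

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"log"
)

// ProviderEvent mirrors a subset of the provider event fields listed above.
type ProviderEvent struct {
	Action     string `json:"action"`
	Username   string `json:"username"`
	ObjectType string `json:"object_type"`
	ObjectName string `json:"object_name"`
	ObjectData string `json:"object_data"` // base64 of the serialized object
}

// decodeObject returns the JSON object carried in object_data as a generic map.
func decodeObject(ev ProviderEvent) (map[string]any, error) {
	raw, err := base64.StdEncoding.DecodeString(ev.ObjectData)
	if err != nil {
		return nil, fmt.Errorf("invalid object_data: %w", err)
	}
	var obj map[string]any
	if err := json.Unmarshal(raw, &obj); err != nil {
		return nil, fmt.Errorf("invalid object JSON: %w", err)
	}
	return obj, nil
}

func main() {
	// A fabricated event, just to exercise decodeObject.
	ev := ProviderEvent{
		Action:     "add",
		ObjectType: "user",
		ObjectName: "user1",
		ObjectData: base64.StdEncoding.EncodeToString([]byte(`{"username":"user1"}`)),
	}
	obj, err := decodeObject(ev)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("%s %s: %v", ev.Action, ev.ObjectType, obj)
}
```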
The log events will contain a JSON serialized struct in the message body with the following fields:

- `timestamp`, string formatted as RFC3339 with nanoseconds precision
- `event`, integer, a supported log event type
- `protocol`, string. Possible values are `SSH`, `SFTP`, `SCP`, `FTP`, `DAV`, `HTTP`, `HTTPShare`, `DataRetention`, `OIDC`, `EventAction`
- `username`, string
- `ip`, string
- `message`, string. Log message
- `role`, string. Included if the admin/user who executed the action has a role
- `instance_id`, string. Included if you pass an instance id as the second CLI parameter

The metadata `action` is set to `log` and `event` contains the string representation of the log event.
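Because metadata is set on every message, a consumer can route events without parsing the JSON body first. A sketch, assuming the metadata keys match the field names documented above and where the `handle*` callbacks are hypothetical application code:

```go
package main

import (
	"log"

	"gocloud.dev/pubsub"
)

// routeMessage dispatches on the message metadata alone, without decoding the
// JSON body. The metadata keys ("action", "object_type", "event") are assumed
// to match the field names documented above.
func routeMessage(msg *pubsub.Message) {
	switch {
	case msg.Metadata["action"] == "log":
		handleLogEvent(msg.Body, msg.Metadata["event"])
	case msg.Metadata["object_type"] != "":
		handleProviderEvent(msg.Body, msg.Metadata["object_type"])
	default:
		handleFsEvent(msg.Body, msg.Metadata["action"])
	}
}

// Hypothetical application callbacks.
func handleFsEvent(body []byte, action string)           { log.Printf("fs event: %s", action) }
func handleProviderEvent(body []byte, objectType string) { log.Printf("provider event: %s", objectType) }
func handleLogEvent(body []byte, event string)           { log.Printf("log event: %s", event) }

func main() {
	// A locally built message with made-up values, just to exercise the router.
	routeMessage(&pubsub.Message{
		Body:     []byte(`{"message":"connection failed"}`),
		Metadata: map[string]string{"action": "log", "event": "LoginFailed"},
	})
}
```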
We use Go CDK to access several publish/subscribe systems in a portable way.

To publish events to a Google Cloud Pub/Sub topic, you have to use `gcppubsub` as the URL scheme and include the project ID and the topic ID within the URL, for example `gcppubsub://projects/myproject/topics/mytopic`.
This plugin will use Application Default Credentials. See the Go CDK documentation for alternatives such as using environment variables.
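As an illustration of what the plugin does under the hood (a sketch, not the plugin's actual code), this is how a Go CDK client opens the sample topic URL above and publishes a message:

```go
package main

import (
	"context"
	"log"

	"gocloud.dev/pubsub"
	_ "gocloud.dev/pubsub/gcppubsub" // registers the gcppubsub:// scheme
)

func main() {
	ctx := context.Background()
	// Application Default Credentials are picked up automatically.
	topic, err := pubsub.OpenTopic(ctx, "gcppubsub://projects/myproject/topics/mytopic")
	if err != nil {
		log.Fatal(err)
	}
	defer topic.Shutdown(ctx)

	if err := topic.Send(ctx, &pubsub.Message{Body: []byte("test event")}); err != nil {
		log.Fatal(err)
	}
}
```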
To publish events to an Amazon Simple Notification Service (SNS) topic, you have to use `awssns` as the URL scheme. The topic is identified by its Amazon Resource Name (ARN). You should specify the `region` query parameter to ensure your application connects to the correct region.
Here is a sample URL: `awssns:///arn:aws:sns:us-east-2:123456789012:mytopic?region=us-east-2`.
The plugin will create a default AWS Session with the `SharedConfigEnable` option enabled. See the AWS Session documentation to learn about authentication alternatives, including using environment variables.
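The same Go CDK pattern applies here; note the three slashes in the sample URL, since the ARN (which itself contains colons) is carried in the URL path and the host part is empty. A sketch using the sample ARN above:

```go
package main

import (
	"context"
	"log"

	"gocloud.dev/pubsub"
	_ "gocloud.dev/pubsub/awssnssqs" // registers the awssns:// and awssqs:// schemes
)

func main() {
	ctx := context.Background()
	// Three slashes: the host part is empty because the topic ARN is the path.
	topic, err := pubsub.OpenTopic(ctx,
		"awssns:///arn:aws:sns:us-east-2:123456789012:mytopic?region=us-east-2")
	if err != nil {
		log.Fatal(err)
	}
	defer topic.Shutdown(ctx)
}
```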
To publish events to an Amazon Simple Queue Service (SQS) queue, you have to use a URL that closely resembles the queue URL, except the leading `https://` is replaced with `awssqs://`. You can specify the `region` query parameter to ensure your application connects to the correct region.
Here is a sample URL: `awssqs://sqs.us-east-2.amazonaws.com/123456789012/myqueue?region=us-east-2`.
To publish to an Azure Service Bus topic over AMQP 1.0, you have to use `azuresb` as the URL scheme and the topic name as the URL host.
Here is a sample URL: `azuresb://mytopic`.
We use the environment variable `SERVICEBUS_CONNECTION_STRING` to obtain the Service Bus connection string. The connection string can be obtained from the Azure portal.
To publish events to an AMQP 0.9.1 fanout exchange, the dialect of AMQP spoken by RabbitMQ, you have to use `rabbit` as the URL scheme and the exchange name as the URL host.
Here is a sample URL: `rabbit://myexchange`.
The RabbitMQ server is discovered from the `RABBIT_SERVER_URL` environment variable (which is something like `amqp://guest:guest@localhost:5672/`).
To publish events to a NATS subject, you have to use `nats` as the URL scheme and the subject name as the URL host.
Here is a sample URL: `nats://example.mysubject`.
The NATS server is discovered from the `NATS_SERVER_URL` environment variable (which is something like `nats://nats.example.com`).
To publish events to a Kafka cluster, you have to use `kafka` as the URL scheme and the topic name as the URL host.
Here is a sample URL: `kafka://my-topic`.
The brokers in the Kafka cluster are discovered from the `KAFKA_BROKERS` environment variable (which is a comma-delimited list of hosts, something like `1.2.3.4:9092,5.6.7.8:9092`).
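As a sketch, broker discovery via `KAFKA_BROKERS` looks like this from a Go CDK client (using the sample broker list above):

```go
package main

import (
	"context"
	"log"
	"os"

	"gocloud.dev/pubsub"
	_ "gocloud.dev/pubsub/kafkapubsub" // registers the kafka:// scheme
)

func main() {
	// The driver reads the broker list from KAFKA_BROKERS; it is set here
	// only for illustration, normally it comes from the environment.
	os.Setenv("KAFKA_BROKERS", "1.2.3.4:9092,5.6.7.8:9092")

	ctx := context.Background()
	topic, err := pubsub.OpenTopic(ctx, "kafka://my-topic")
	if err != nil {
		log.Fatal(err)
	}
	defer topic.Shutdown(ctx)
}
```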