
Commit

Merge branch '7.5' of github.com:elastic/kibana into backport/7.5/pr-52834
Aaron Caldwell committed Jan 9, 2020
2 parents 793adf6 + 494ebf1 commit 3ab1490
Showing 7 changed files with 127 additions and 65 deletions.
7 changes: 0 additions & 7 deletions docs/settings/monitoring-settings.asciidoc
@@ -54,13 +54,6 @@ Specifies the password that {kib} uses for authentication when it retrieves data
from the monitoring cluster. If not set, {kib} uses the value of the
`elasticsearch.password` setting.

`telemetry.enabled`::
Set to `true` (default) to send cluster statistics to Elastic. Reporting your
cluster statistics helps us improve your user experience. Your data is never
shared with anyone. Set to `false` to disable statistics reporting from any
browser connected to the {kib} instance. You can also opt out through the
*Advanced Settings* in {kib}.

`xpack.monitoring.elasticsearch.pingTimeout`::
Specifies the time in milliseconds to wait for {es} to respond to internal
health checks. By default, it matches the `elasticsearch.pingTimeout` setting,
5 changes: 5 additions & 0 deletions docs/setup/settings.asciidoc
@@ -356,6 +356,11 @@ cannot be `false` at the same time.
To enable telemetry and prevent users from disabling it,
set `telemetry.allowChangingOptInStatus` to `false` and `telemetry.optIn` to `true`.

`telemetry.enabled`:: *Default: true* Reporting your cluster statistics helps
us improve your user experience. Your data is never shared with anyone. Set to
`false` to disable telemetry capabilities entirely. You can alternatively opt
out through the *Advanced Settings* in {kib}.

`vega.enableExternalUrls`:: *Default: false* Set this value to `true` to allow Vega to use any URL to access external data sources and images. If `false`, Vega can only get data from Elasticsearch.

`xpack.license_management.enabled`:: *Default: true* Set this value to false to
65 changes: 35 additions & 30 deletions docs/spaces/index.asciidoc
@@ -2,13 +2,13 @@
[[xpack-spaces]]
== Spaces

Spaces enable you to organize your dashboards and other saved
objects into meaningful categories. Once inside a space, you see only
the dashboards and saved objects that belong to that space.

{kib} creates a default space for you.
After you create your own
spaces, you're asked to choose a space when you log in to Kibana. You can change your
current space at any time by using the menu in the upper left.

[role="screenshot"]
@@ -29,24 +29,24 @@ Kibana supports spaces in several ways. You can:
[[spaces-managing]]
=== View, create, and delete spaces

Go to **Management > Spaces** for an overview of your spaces. This view provides actions
for you to create, edit, and delete spaces.

[role="screenshot"]
image::spaces/images/space-management.png["Space management"]

[float]
==== Create or edit a space

You can create as many spaces as you like. Click *Create a space* and provide a name,
URL identifier, and optional description.

The URL identifier is a short text string that becomes part of the
{kib} URL when you are inside that space. {kib} suggests a URL identifier based
on the name of your space, but you can customize the identifier to your liking.
You cannot change the space identifier once you create the space.

{kib} also has an <<spaces-api, API>>
if you prefer to create spaces programmatically.

[role="screenshot"]
@@ -55,22 +55,22 @@ image::spaces/images/edit-space.png["Space management"]
[float]
==== Delete a space

Deleting a space permanently removes the space and all of its contents.
Find the space on the *Spaces* overview page and click the trash icon in the Actions column.
You can't delete the default space, but you can customize it to your liking.

[float]
[[spaces-control-feature-visibility]]
=== Control feature access based on user needs

You have control over which features are visible in each space.
For example, you might hide Dev Tools
in your "Executive" space or show Stack Monitoring only in your "Admin" space.
You can define which features to show or hide when you add or edit a space.

Controlling feature
visibility is not a security feature. To secure access
to specific features on a per-user basis, you must configure
<<xpack-security-authorization, Kibana Security>>.

[role="screenshot"]
@@ -80,10 +80,10 @@ image::spaces/images/edit-space-feature-visibility.png["Controlling features visi
[[spaces-control-user-access]]
=== Control feature access based on user privileges

When using Kibana with security, you can configure applications and features
based on your users’ privileges. This means different roles can have access
to different features in the same space.
Power users might have privileges to create and edit visualizations and dashboards,
while analysts or executives might have Dashboard and Canvas with read-only privileges.
See <<adding_kibana_privileges>> for details.

@@ -106,7 +106,7 @@ interface.
. Import your saved objects.
. (Optional) Delete objects in the export space that you no longer need.

{kib} also has beta <<saved-objects-api-import, import>> and
<<saved-objects-api-export, export>> APIs if you want to automate this process.

[float]
@@ -115,17 +115,22 @@ interface.

You can create a custom experience for users by configuring the {kib} landing page on a per-space basis.
The landing page can route users to a specific dashboard, application, or saved object as they enter each space.
To configure the landing page, use the default route setting in <<kibana-general-settings, Management > Advanced settings>>.
For example, you might set the default route to `/app/kibana#/dashboards`.

[role="screenshot"]
image::spaces/images/spaces-configure-landing-page.png["Configure space-level landing page"]


[float]
[[spaces-delete-started]]
=== Disable and version updates

Spaces are automatically enabled in {kib}. If you don't want to use this feature,
you can disable it
by setting `xpack.spaces.enabled` to `false` in your
`kibana.yml` configuration file.

If you are upgrading your
version of {kib}, the default space will contain all of your existing saved objects.
@@ -14,6 +14,7 @@ import {
setHttpClient,
TelemetryOptInProvider,
} from './lib/telemetry';
import { npStart } from 'ui/new_platform';
import { I18nContext } from 'ui/i18n';
import chrome from 'ui/chrome';

@@ -63,7 +64,7 @@ const manageAngularLifecycle = ($scope, $route, elem) => {
});
};
const initializeTelemetry = $injector => {
const telemetryEnabled = $injector.get('telemetryEnabled');
const telemetryEnabled = npStart.core.injectedMetadata.getInjectedVar('telemetryEnabled');
const Private = $injector.get('Private');
const telemetryOptInProvider = Private(TelemetryOptInProvider);
setTelemetryOptInService(telemetryOptInProvider);
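As an aside, the one-line change above follows the new-platform pattern of reading server-injected variables through `npStart` rather than through the Angular `$injector`. A minimal, hypothetical sketch of that pattern (the helper below is illustrative and not part of this commit):

// Illustrative sketch only: not part of this commit.
// Reads a server-injected variable via the new platform start contract,
// as the changed line above does, instead of via the Angular $injector.
import { npStart } from 'ui/new_platform';

export function isTelemetryEnabled() {
  // getInjectedVar returns the value the Kibana server injected into the page.
  return Boolean(npStart.core.injectedMetadata.getInjectedVar('telemetryEnabled'));
}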
@@ -209,7 +209,7 @@ describe('BulkUploader', () => {
}, CHECK_DELAY);
});

it('refetches UsageCollectors if uploading to local cluster was not successful', done => {
it('stops refetching UsageCollectors if uploading to local cluster was not successful', async () => {
const usageCollectorFetch = sinon
.stub()
.returns({ type: 'type_usage_collector_test', result: { testData: 12345 } });
@@ -227,12 +227,52 @@

uploader._onPayload = async () => ({ took: 0, ignored: true, errors: false });

uploader.start(collectors);
setTimeout(() => {
uploader.stop();
expect(usageCollectorFetch.callCount).to.be.greaterThan(1);
done();
}, CHECK_DELAY);
await uploader._fetchAndUpload(uploader.filterCollectorSet(collectors));
await uploader._fetchAndUpload(uploader.filterCollectorSet(collectors));
await uploader._fetchAndUpload(uploader.filterCollectorSet(collectors));

expect(uploader._holdSendingUsage).to.eql(true);
expect(usageCollectorFetch.callCount).to.eql(1);
});

it('fetches UsageCollectors once uploading to local cluster is successful again', async () => {
const usageCollectorFetch = sinon
.stub()
.returns({ type: 'type_usage_collector_test', result: { usageData: 12345 } });

const statsCollectorFetch = sinon
.stub()
.returns({ type: 'type_stats_collector_test', result: { statsData: 12345 } });

const collectors = new MockCollectorSet(server, [
{
fetch: statsCollectorFetch,
isReady: () => true,
formatForBulkUpload: result => result,
isUsageCollector: false,
},
{
fetch: usageCollectorFetch,
isReady: () => true,
formatForBulkUpload: result => result,
isUsageCollector: true,
},
]);

const uploader = new BulkUploader({ ...server, interval: FETCH_INTERVAL });
let bulkIgnored = true;
uploader._onPayload = async () => ({ took: 0, ignored: bulkIgnored, errors: false });

await uploader._fetchAndUpload(uploader.filterCollectorSet(collectors));
expect(uploader._holdSendingUsage).to.eql(true);

bulkIgnored = false;
await uploader._fetchAndUpload(uploader.filterCollectorSet(collectors));
await uploader._fetchAndUpload(uploader.filterCollectorSet(collectors));

expect(uploader._holdSendingUsage).to.eql(false);
expect(usageCollectorFetch.callCount).to.eql(2);
expect(statsCollectorFetch.callCount).to.eql(3);
});

it('calls UsageCollectors if last reported exceeds during a _usageInterval', done => {
@@ -40,8 +40,14 @@ export class BulkUploader {
}

this._timer = null;
// Hold sending and fetching usage until monitoring.bulk is successful. This means that
// we send usage data on the second tick, but it saves a lot of bandwidth by not fetching
// usage on every tick while ES is failing or monitoring is disabled.
this._holdSendingUsage = false;
this._interval = interval;
this._lastFetchUsageTime = null;
// Limit sending and fetching usage to once per day once usage is successfully stored
// into the monitoring indices.
this._usageInterval = TELEMETRY_COLLECTION_INTERVAL;

this._log = {
@@ -65,38 +71,45 @@
});
}

filterCollectorSet(usageCollection) {
const successfulUploadInLastDay =
this._lastFetchUsageTime && this._lastFetchUsageTime + this._usageInterval > Date.now();

return usageCollection.getFilteredCollectorSet(c => {
// this is internal bulk upload, so filter out API-only collectors
if (c.ignoreForInternalUploader) {
return false;
}
// Only collect usage data at the same interval as telemetry would (default to once a day)
if (usageCollection.isUsageCollector(c)) {
if (this._holdSendingUsage) {
return false;
}
if (successfulUploadInLastDay) {
return false;
}
}

return true;
});
}

/*
* Start the interval timer
* @param {CollectorSet} collectorSet object to use for the initial fetch/upload and for fetch/uploading on the interval
* @return undefined
*/
start(collectorSet) {
this._log.info('Starting monitoring stats collection');
const filterCollectorSet = _collectorSet => {
const successfulUploadInLastDay =
this._lastFetchUsageTime && this._lastFetchUsageTime + this._usageInterval > Date.now();

return _collectorSet.getFilteredCollectorSet(c => {
// this is internal bulk upload, so filter out API-only collectors
if (c.ignoreForInternalUploader) {
return false;
}
// Only collect usage data at the same interval as telemetry would (default to once a day)
if (successfulUploadInLastDay && _collectorSet.isUsageCollector(c)) {
return false;
}
return true;
});
};

if (this._timer) {
clearInterval(this._timer);
} else {
this._fetchAndUpload(filterCollectorSet(collectorSet)); // initial fetch
this._fetchAndUpload(this.filterCollectorSet(collectorSet)); // initial fetch
}

this._timer = setInterval(() => {
this._fetchAndUpload(filterCollectorSet(collectorSet));
this._fetchAndUpload(this.filterCollectorSet(collectorSet));
}, this._interval);
}

@@ -146,12 +159,17 @@
const sendSuccessful = !result.ignored && !result.errors;
if (!sendSuccessful && hasUsageCollectors) {
this._lastFetchUsageTime = null;
this._holdSendingUsage = true;
this._log.debug(
'Resetting lastFetchWithUsage because uploading to the cluster was not successful.'
);
}
if (sendSuccessful && hasUsageCollectors) {
this._lastFetchUsageTime = Date.now();

if (sendSuccessful) {
this._holdSendingUsage = false;
if (hasUsageCollectors) {
this._lastFetchUsageTime = Date.now();
}
}
this._log.debug(`Uploaded bulk stats payload to the local cluster`);
} catch (err) {
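For context, the gating behavior added to `BulkUploader` can be summarized as follows: usage collectors are skipped while `_holdSendingUsage` is set or while the last successful usage upload is newer than `_usageInterval`; a failed or ignored bulk upload that included usage collectors re-engages the hold and clears the timestamp, and a successful upload releases it. A condensed, hypothetical sketch of that logic (simplified names, not the Kibana source):

// Condensed sketch of the gating logic above. Illustrative only; the class and
// method names are hypothetical and do not exist in Kibana.
class UsageUploadGate {
  constructor(usageIntervalMs) {
    this.usageIntervalMs = usageIntervalMs; // e.g. the daily TELEMETRY_COLLECTION_INTERVAL
    this.holdSendingUsage = false;          // wait for a successful bulk upload before sending usage
    this.lastFetchUsageTime = null;         // time of the last successful usage upload
  }

  // Mirrors filterCollectorSet: should usage collectors run on this tick?
  shouldFetchUsage(now = Date.now()) {
    if (this.holdSendingUsage) {
      return false;
    }
    const successfulUploadInLastInterval =
      this.lastFetchUsageTime !== null &&
      this.lastFetchUsageTime + this.usageIntervalMs > now;
    return !successfulUploadInLastInterval;
  }

  // Mirrors how _fetchAndUpload handles the bulk result: update state after each upload.
  recordUploadResult({ ignored, errors }, hasUsageCollectors, now = Date.now()) {
    const sendSuccessful = !ignored && !errors;
    if (!sendSuccessful && hasUsageCollectors) {
      this.holdSendingUsage = true;    // stop fetching usage until a bulk upload succeeds
      this.lastFetchUsageTime = null;
    }
    if (sendSuccessful) {
      this.holdSendingUsage = false;
      if (hasUsageCollectors) {
        this.lastFetchUsageTime = now; // throttle the next usage fetch to once per interval
      }
    }
  }
}

Under this model, usage is fetched on the first tick, held after a failed or ignored upload, and resumed once a later bulk upload succeeds, which is what the updated tests above assert.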
