
Update api docs #103310

Merged — 4 commits merged into elastic:master on Jun 29, 2021

Conversation

@stacey-gammon (Contributor) commented Jun 24, 2021

  • Pulled master
  • ran yarn kbn bootstrap
  • ran node scripts/build_api_docs

The diff is so large primarily because the timelines and security solution plugins have greatly increased their public APIs recently.

(Two screenshots attached: Screen Shot 2021-06-24 at 11 34 31 AM, Screen Shot 2021-06-24 at 11 34 25 AM.)

@stacey-gammon added the release_note:skip, v7.14.0, and v8.0.0 labels on Jun 24, 2021
@stacey-gammon requested a review from @spalger on June 24, 2021 17:29
@spalger (Contributor) commented Jun 24, 2021

@XavierM How long do you expect #100265 to keep causing this massive duplication? Is this something that's going to be fixed in the next couple of days?

@XavierM (Contributor) commented Jun 29, 2021

@XavierM How long do you expect #100265 to keep causing this massive duplication? Is this something that's going to be fixed in the next couple of days?

As of right now, we have a feature flag and we are not using the timelines UI. Would it be possible to disable the docs on the timelines side for now, until the team removes all the duplication? Since that was merged, our direction has changed: we are no longer using the timeline grid, only EuiDataGrid for the alerts table. We just need time to get it done.

@stacey-gammon (Contributor, Author)

If you use the @internal tag, the APIs will be stripped from the docs, though I'm guessing adding that to every API would be a lot of work.
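
For reference, marking an export with the TSDoc @internal tag looks roughly like this (a sketch only; the interface and field names below are made up for illustration, not real Kibana APIs):

```ts
/**
 * @internal
 * Not part of the public contract, so the API docs build leaves it out of the generated docs.
 */
export interface TimelineGridInternalState {
  columns: string[];
  sort: { field: string; direction: 'asc' | 'desc' };
}

/** No @internal tag: this export still shows up in the generated API docs. */
export interface AlertsTableProps {
  indexNames: string[];
  pageSize?: number;
}
```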

I could manually delete the file (though I'm not sure exactly which one you would want me to delete), but the next time I run this command, chances are I'll forget to manually delete it and it'll get merged back in.

I'm inclined to just merge as is, unless you are concerned about the amount of data being stored in the repo, @spalger?

@kibanamachine (Contributor)

💛 Build succeeded, but was flaky


Test Failures

Kibana Pipeline / general / X-Pack Security API Integration Tests (Session Idle Timeout).x-pack/test/security_api_integration/tests/session_idle/cleanup·ts.security APIs - Session Idle Session Idle cleanup should not clean up session if user is active

Link to Jenkins

Standard Out

Failed Tests Reporter:
  - Test has not failed recently on tracked branches

[00:00:00]       │
[00:00:00]         └-: security APIs - Session Idle
[00:00:00]           └-> "before all" hook in "security APIs - Session Idle"
[00:00:00]           └-: Session Idle cleanup
[00:00:00]             └-> "before all" hook for "should properly clean up session expired because of idle timeout"
[00:00:00]             └-> should properly clean up session expired because of idle timeout
[00:00:00]               └-> "before each" hook: global before each for "should properly clean up session expired because of idle timeout"
[00:00:00]               └-> "before each" hook for "should properly clean up session expired because of idle timeout"
[00:00:00]                 │ debg Deleting indices [attempt=1] [pattern=.kibana_security_session*] ".kibana_security_session_1"
[00:00:00]                 │ info [o.e.c.m.MetadataDeleteIndexService] [node-01] [.kibana_security_session_1/uavXsB5ETy6eFWzKs7VSbg] deleting index
[00:00:00]               │ proc [kibana]   log   [15:00:11.997] [info][plugins][routes][security] Logging in with provider "basic1" (basic)
[00:00:00]               │ info [o.e.c.m.MetadataCreateIndexService] [node-01] [.kibana_security_session_1] creating index, cause [auto(bulk api)], templates [.kibana_security_session_index_template_1], shards [1]/[0]
[00:00:00]               │ info [o.e.c.r.a.AllocationService] [node-01] current.health="GREEN" message="Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana_security_session_1][0]]])." previous.health="YELLOW" reason="shards started [[.kibana_security_session_1][0]]"
[00:00:00]               │ info [o.e.c.m.MetadataCreateIndexService] [node-01] [.ds-ilm-history-5-2021.06.29-000001] creating index, cause [initialize_data_stream], templates [ilm-history], shards [1]/[0]
[00:00:00]               │ info [o.e.c.m.MetadataCreateDataStreamService] [node-01] adding data stream [ilm-history-5] with write index [.ds-ilm-history-5-2021.06.29-000001] and backing indices []
[00:00:00]               │ info [o.e.x.i.IndexLifecycleTransition] [node-01] moving index [.ds-ilm-history-5-2021.06.29-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [ilm-history-ilm-policy]
[00:00:00]               │ info [o.e.c.r.a.AllocationService] [node-01] current.health="GREEN" message="Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.ds-ilm-history-5-2021.06.29-000001][0]]])." previous.health="YELLOW" reason="shards started [[.ds-ilm-history-5-2021.06.29-000001][0]]"
[00:00:00]               │ info [o.e.x.i.IndexLifecycleTransition] [node-01] moving index [.ds-ilm-history-5-2021.06.29-000001] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] in policy [ilm-history-ilm-policy]
[00:00:00]               │ info [o.e.x.i.IndexLifecycleTransition] [node-01] moving index [.ds-ilm-history-5-2021.06.29-000001] from [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] to [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}] in policy [ilm-history-ilm-policy]
[00:00:42]               └- ✓ pass  (42.1s) "security APIs - Session Idle Session Idle cleanup should properly clean up session expired because of idle timeout"
[00:00:42]             └-> should properly clean up session expired because of idle timeout when providers override global session config
[00:00:42]               └-> "before each" hook: global before each for "should properly clean up session expired because of idle timeout when providers override global session config"
[00:00:42]               └-> "before each" hook for "should properly clean up session expired because of idle timeout when providers override global session config"
[00:00:42]                 │ debg Deleting indices [attempt=1] [pattern=.kibana_security_session*] ".kibana_security_session_1"
[00:00:42]                 │ info [o.e.c.m.MetadataDeleteIndexService] [node-01] [.kibana_security_session_1/JiTOyJ2lTOiUc1YgC3MZZg] deleting index
[00:00:42]               │ proc [kibana]   log   [15:00:54.160] [info][plugins][routes][security] Logging in with provider "saml_disable" (saml)
[00:00:42]               │ proc [kibana]   log   [15:00:54.166] [info][plugins][routes][security] Logging in with provider "saml_override" (saml)
[00:00:42]               │ proc [kibana]   log   [15:00:54.171] [info][plugins][routes][security] Logging in with provider "saml_fallback" (saml)
[00:00:42]               │ info [o.e.c.m.MetadataCreateIndexService] [node-01] [.kibana_security_session_1] creating index, cause [auto(bulk api)], templates [.kibana_security_session_index_template_1], shards [1]/[0]
[00:00:42]               │ info [o.e.c.r.a.AllocationService] [node-01] current.health="GREEN" message="Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana_security_session_1][0]]])." previous.health="YELLOW" reason="shards started [[.kibana_security_session_1][0]]"
[00:00:43]               │ info [o.e.x.s.s.SecurityIndexManager] [node-01] security index does not exist, creating [.security-tokens-7] with alias [.security-tokens]
[00:00:43]               │ info [o.e.x.s.s.SecurityIndexManager] [node-01] security index does not exist, creating [.security-tokens-7] with alias [.security-tokens]
[00:00:43]               │ info [o.e.x.s.s.SecurityIndexManager] [node-01] security index does not exist, creating [.security-tokens-7] with alias [.security-tokens]
[00:00:43]               │ info [o.e.c.m.MetadataCreateIndexService] [node-01] [.security-tokens-7] creating index, cause [api], templates [], shards [1]/[0]
[00:00:43]               │ info [o.e.c.r.a.AllocationService] [node-01] current.health="GREEN" message="Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.security-tokens-7][0]]])." previous.health="YELLOW" reason="shards started [[.security-tokens-7][0]]"
[00:00:46]               │ proc [kibana]   log   [15:00:58.340] [info][plugins][routes][security] Logging in with provider "basic1" (basic)
[00:01:31]               └- ✓ pass  (49.2s) "security APIs - Session Idle Session Idle cleanup should properly clean up session expired because of idle timeout when providers override global session config"
[00:01:31]             └-> should not clean up session if user is active
[00:01:31]               └-> "before each" hook: global before each for "should not clean up session if user is active"
[00:01:31]               └-> "before each" hook for "should not clean up session if user is active"
[00:01:31]                 │ debg Deleting indices [attempt=1] [pattern=.kibana_security_session*] ".kibana_security_session_1"
[00:01:31]                 │ info [o.e.c.m.MetadataDeleteIndexService] [node-01] [.kibana_security_session_1/yf5N_5GgSOGeSsM03FF49A] deleting index
[00:01:31]               │ proc [kibana]   log   [15:01:43.451] [info][plugins][routes][security] Logging in with provider "basic1" (basic)
[00:01:31]               │ info [o.e.c.m.MetadataCreateIndexService] [node-01] [.kibana_security_session_1] creating index, cause [auto(bulk api)], templates [.kibana_security_session_index_template_1], shards [1]/[0]
[00:01:31]               │ info [o.e.c.r.a.AllocationService] [node-01] current.health="GREEN" message="Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana_security_session_1][0]]])." previous.health="YELLOW" reason="shards started [[.kibana_security_session_1][0]]"
[00:01:34]               │ debg Session is still valid after 1.5s
[00:01:35]               │ debg Session is still valid after 3s
[00:01:37]               │ debg Session is still valid after 4.5s
[00:01:38]               │ debg Session is still valid after 6s
[00:01:40]               │ debg Session is still valid after 7.5s
[00:01:42]               │ debg Session is still valid after 9s
[00:01:44]               │ debg Session is still valid after 10.5s
[00:01:45]               │ debg Session is still valid after 12s
[00:01:47]               │ debg Session is still valid after 13.5s
[00:01:48]               │ debg Session is still valid after 15s
[00:01:50]               │ debg Session is still valid after 16.5s
[00:01:51]               │ debg Session is still valid after 18s
[00:01:53]               │ debg Session is still valid after 19.5s
[00:01:55]               │ debg Session is still valid after 21s
[00:01:56]               │ proc [kibana]   log   [15:02:08.807] [warning][plugins][taskManager] Detected potential performance issue with Task Manager. Set 'xpack.task_manager.monitored_stats_health_verbose_log.enabled: true' in your Kibana.yml to enable debug logging
[00:02:01]               └- ✖ fail: security APIs - Session Idle Session Idle cleanup should not clean up session if user is active
[00:02:01]               │      Error: expected 200 "OK", got 401 "Unauthorized"
[00:02:01]               │       at Test._assertStatus (/dev/shm/workspace/parallel/19/kibana/node_modules/supertest/lib/test.js:268:12)
[00:02:01]               │       at Test._assertFunction (/dev/shm/workspace/parallel/19/kibana/node_modules/supertest/lib/test.js:283:11)
[00:02:01]               │       at Test.assert (/dev/shm/workspace/parallel/19/kibana/node_modules/supertest/lib/test.js:173:18)
[00:02:01]               │       at assert (/dev/shm/workspace/parallel/19/kibana/node_modules/supertest/lib/test.js:131:12)
[00:02:01]               │       at /dev/shm/workspace/parallel/19/kibana/node_modules/supertest/lib/test.js:128:5
[00:02:01]               │       at Test.Request.callback (/dev/shm/workspace/parallel/19/kibana/node_modules/supertest/node_modules/superagent/lib/node/index.js:718:3)
[00:02:01]               │       at /dev/shm/workspace/parallel/19/kibana/node_modules/supertest/node_modules/superagent/lib/node/index.js:906:18
[00:02:01]               │       at IncomingMessage.<anonymous> (/dev/shm/workspace/parallel/19/kibana/node_modules/supertest/node_modules/superagent/lib/node/parsers/json.js:19:7)
[00:02:01]               │       at endReadableNT (internal/streams/readable.js:1336:12)
[00:02:01]               │       at processTicksAndRejections (internal/process/task_queues.js:82:21)
[00:02:01]               │ 
[00:02:01]               │ 

Stack Trace

Error: expected 200 "OK", got 401 "Unauthorized"
    at Test._assertStatus (/dev/shm/workspace/parallel/19/kibana/node_modules/supertest/lib/test.js:268:12)
    at Test._assertFunction (/dev/shm/workspace/parallel/19/kibana/node_modules/supertest/lib/test.js:283:11)
    at Test.assert (/dev/shm/workspace/parallel/19/kibana/node_modules/supertest/lib/test.js:173:18)
    at assert (/dev/shm/workspace/parallel/19/kibana/node_modules/supertest/lib/test.js:131:12)
    at /dev/shm/workspace/parallel/19/kibana/node_modules/supertest/lib/test.js:128:5
    at Test.Request.callback (/dev/shm/workspace/parallel/19/kibana/node_modules/supertest/node_modules/superagent/lib/node/index.js:718:3)
    at /dev/shm/workspace/parallel/19/kibana/node_modules/supertest/node_modules/superagent/lib/node/index.js:906:18
    at IncomingMessage.<anonymous> (/dev/shm/workspace/parallel/19/kibana/node_modules/supertest/node_modules/superagent/lib/node/parsers/json.js:19:7)
    at endReadableNT (internal/streams/readable.js:1336:12)
    at processTicksAndRejections (internal/process/task_queues.js:82:21)
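
For context, this error comes from a supertest status assertion of roughly the following shape (a sketch only, not the actual test code; the helper name, endpoint, and cookie handling are illustrative):

```ts
import supertest from 'supertest';

// Hypothetical sketch of the failing check: after keeping the session "active",
// the test asserts that an authenticated request still succeeds. A 401 response
// here means the session was cleaned up even though the user was active.
async function assertSessionStillValid(kibanaUrl: string, sessionCookie: string) {
  await supertest(kibanaUrl)
    .get('/internal/security/me') // illustrative endpoint
    .set('Cookie', sessionCookie)
    .expect(200); // on failure supertest throws: expected 200 "OK", got 401 "Unauthorized"
}
```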

Metrics [docs]

✅ unchanged

History

To update your PR or re-run it, just comment with:
@elasticmachine merge upstream

@spalger (Contributor) left a comment


@XavierM correct me if I'm misunderstanding, but it sounds like 300kb of code is being downloaded on each page load and then not used? Why hasn't that PR been reverted? Anyway, this isn't your fault, @stacey-gammon. LGTM

@stacey-gammon added the auto-backport label on Jun 29, 2021
@stacey-gammon merged commit 7ff2895 into elastic:master on Jun 29, 2021
@kibanamachine (Contributor)

💔 Backport failed

Branch 7.x: Commit could not be cherry-picked due to conflicts

To backport manually run:
node scripts/backport --pr 103310

stacey-gammon added a commit to stacey-gammon/kibana that referenced this pull request Jun 30, 2021
* Update api docs

* update api docs after merge from master
# Conflicts:
#	api_docs/core.json
#	api_docs/data.json
#	api_docs/data_index_patterns.json
#	api_docs/licensing.json
#	api_docs/saved_objects.json
stacey-gammon added a commit that referenced this pull request Jun 30, 2021
* Update api docs

* update api docs after merge from master
# Conflicts:
#	api_docs/core.json
#	api_docs/data.json
#	api_docs/data_index_patterns.json
#	api_docs/licensing.json
#	api_docs/saved_objects.json
Labels
auto-backport (Deprecated - use backport:version if exact versions are needed), release_note:skip (Skip the PR/issue when compiling release notes), v7.14.0, v8.0.0

4 participants