Dashboards public links not working #1238

Closed · WesleyBatista opened this issue Aug 16, 2016 · 19 comments

@WesleyBatista
Contributor

WesleyBatista commented Aug 16, 2016

Issue Summary

Hello guys,

Trying to share a dashboard, I am getting this:

(screenshot of the error)

The log starting from the call:

[2016-08-16 22:21:55,103][PID:15393][INFO][metrics] method=GET path=/public/dashboards/4y6fsdfymPRy7hCOHe5OsD2RdmfqXEim3QOMN53QJRuPL1 endpoint=redash.public_dashboard status=500 content_type=? content_length=-1 duration=53.16 query_count=17 query_duration=13.12
[2016-08-16 22:21:55,751][PID:15393][INFO][metrics] method=GET path=/styles/main.515f0263.css endpoint=redash.send_static status=200 content_type=text/css; charset=utf-8 content_length=258555 duration=2.88 query_count=0 query_duration=0.00
[2016-08-16 22:21:55,751][PID:15393][INFO][metrics] method=GET path=/styles/main.515f0263.css endpoint=redash.send_static status=500 content_type=? content_length=-1 duration=3.26 query_count=0 query_duration=0.00
[2016-08-16 22:21:56,017][PID:15393][INFO][metrics] method=GET path=/scripts/plugins.3e90b1ef.js endpoint=redash.send_static status=200 content_type=application/javascript content_length=2517605 duration=2.84 query_count=0 query_duration=0.00
[2016-08-16 22:21:56,018][PID:15393][INFO][metrics] method=GET path=/scripts/plugins.3e90b1ef.js endpoint=redash.send_static status=500 content_type=? content_length=-1 duration=3.22 query_count=0 query_duration=0.00
[2016-08-16 22:21:56,285][PID:15393][INFO][metrics] method=GET path=/scripts/scripts.f59202ee.js endpoint=redash.send_static status=200 content_type=application/javascript content_length=100183 duration=2.85 query_count=0 query_duration=0.00
[2016-08-16 22:21:56,285][PID:15393][INFO][metrics] method=GET path=/scripts/scripts.f59202ee.js endpoint=redash.send_static status=500 content_type=? content_length=-1 duration=3.24 query_count=0 query_duration=0.00
[2016-08-16 22:21:56,407][PID:14741][INFO][metrics] method=GET path=/fonts/roboto/Roboto-Regular-webfont.woff endpoint=redash.send_static status=200 content_type=application/x-font-woff content_length=25020 duration=2.87 query_count=0 query_duration=0.00
[2016-08-16 22:21:56,408][PID:14741][INFO][metrics] method=GET path=/fonts/roboto/Roboto-Regular-webfont.woff endpoint=redash.send_static status=500 content_type=? content_length=-1 duration=3.25 query_count=0 query_duration=0.00
[2016-08-16 22:21:56,695][PID:15881][INFO][metrics] method=GET path=/images/redash_icon_small.png endpoint=redash.send_static status=200 content_type=image/png content_length=3123 duration=2.85 query_count=0 query_duration=0.00
[2016-08-16 22:21:56,696][PID:15881][INFO][metrics] method=GET path=/images/redash_icon_small.png endpoint=redash.send_static status=500 content_type=? content_length=-1 duration=3.23 query_count=0 query_duration=0.00
[2016-08-16 22:21:58,824][PID:15881][INFO][metrics] method=GET path=/views/dashboard.html endpoint=redash.send_static status=200 content_type=text/html; charset=utf-8 content_length=6620 duration=3.16 query_count=0 query_duration=0.00
[2016-08-16 22:21:58,825][PID:15881][INFO][metrics] method=GET path=/views/dashboard.html endpoint=redash.send_static status=500 content_type=? content_length=-1 duration=4.61 query_count=0 query_duration=0.00
[2016-08-16 22:21:59,199][PID:15881][INFO][metrics] method=GET path=/fonts/roboto/Roboto-Medium-webfont.woff endpoint=redash.send_static status=200 content_type=application/x-font-woff content_length=25048 duration=2.98 query_count=0 query_duration=0.00
[2016-08-16 22:21:59,200][PID:15881][INFO][metrics] method=GET path=/fonts/roboto/Roboto-Medium-webfont.woff endpoint=redash.send_static status=500 content_type=? content_length=-1 duration=4.16 query_count=0 query_duration=0.00
[2016-08-16 22:22:00,270][PID:15393][INFO][metrics] method=POST path=/api/events endpoint=events status=200 content_type=application/json content_length=4 duration=57.31 query_count=20 query_duration=14.60
[2016-08-16 22:22:00,270][PID:15393][INFO][metrics] method=POST path=/api/events endpoint=events status=500 content_type=? content_length=-1 duration=57.82 query_count=20 query_duration=14.60
[2016-08-16 22:22:21,759][PID:15393][INFO][metrics] method=GET path=/views/dashboard_share.html endpoint=redash.send_static status=200 content_type=text/html; charset=utf-8 content_length=637 duration=2.82 query_count=0 query_duration=0.00
[2016-08-16 22:22:21,760][PID:15393][INFO][metrics] method=GET path=/views/dashboard_share.html endpoint=redash.send_static status=500 content_type=? content_length=-1 duration=3.62 query_count=0 query_duration=0.00
[2016-08-16 22:22:26,373][PID:15881][INFO][metrics] method=DELETE path=/api/dashboards/369/share endpoint=dashboard_share status=200 content_type=application/json content_length=4 duration=12.85 query_count=7 query_duration=6.76
[2016-08-16 22:22:26,374][PID:15881][INFO][metrics] method=DELETE path=/api/dashboards/369/share endpoint=dashboard_share status=500 content_type=? content_length=-1 duration=13.67 query_count=7 query_duration=6.76
[2016-08-16 22:22:27,631][PID:15393][INFO][metrics] method=POST path=/api/dashboards/369/share endpoint=dashboard_share status=200 content_type=application/json content_length=179 duration=14.60 query_count=8 query_duration=8.08
[2016-08-16 22:22:27,632][PID:15393][INFO][metrics] method=POST path=/api/dashboards/369/share endpoint=dashboard_share status=500 content_type=? content_length=-1 duration=15.80 query_count=8 query_duration=8.08
[2016-08-16 22:22:36,241: ERROR/MainProcess] Task redash.tasks.record_event[e51bc501-4600-4174-960d-10783b7d13ae] raised unexpected: ValueError("invalid literal for int() with base 10: 'N3xLsL8kCGyBk7y5yXHzH5oMo1wvf5xrhJDakIHf'",)
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 240, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/opt/redash/current/redash/tasks/base.py", line 13, in __call__
    return super(BaseTask, self).__call__(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 437, in __protected_call__
    return self.run(*args, **kwargs)
  File "/opt/redash/current/redash/tasks/general.py", line 15, in record_event
    models.Event.record(event)
  File "/opt/redash/current/redash/models.py", line 1073, in record
    additional_properties=additional_properties, created_at=created_at)
  File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 4001, in create
    inst.save(force_insert=True)
  File "/opt/redash/current/redash/models.py", line 99, in save
    super(BaseModel, self).save(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 4148, in save
    pk_from_cursor = self.insert(**field_dict).execute()
  File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 2858, in execute
    cursor = self._execute()
  File "/opt/redash/current/redash/metrics/database.py", line 50, in metered_execute
    result = real_execute(self, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 2370, in _execute
    sql, params = self.sql()
  File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 2832, in sql
    return self.compiler().generate_insert(self)
  File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 1733, in generate_insert
    return self.build_query(clauses, alias_map)
  File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 1542, in build_query
    return self.parse_node(Clause(*clauses), alias_map)
  File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 1503, in parse_node
    sql, params, unknown = self._parse(node, alias_map, conv)
  File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 1478, in _parse
    sql, params = self._parse_map[node_type](node, alias_map, conv)
  File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 1406, in _parse_clause
    node.nodes, alias_map, conv, node.glue)
  File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 1520, in parse_node_list
    node_sql, node_params = self.parse_node(node, alias_map, conv)
  File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 1503, in parse_node
    sql, params, unknown = self._parse(node, alias_map, conv)
  File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 1478, in _parse
    sql, params = self._parse_map[node_type](node, alias_map, conv)
  File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 1406, in _parse_clause
    node.nodes, alias_map, conv, node.glue)
  File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 1520, in parse_node_list
    node_sql, node_params = self.parse_node(node, alias_map, conv)
  File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 1503, in parse_node
    sql, params, unknown = self._parse(node, alias_map, conv)
  File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 1478, in _parse
    sql, params = self._parse_map[node_type](node, alias_map, conv)
  File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 1406, in _parse_clause
    node.nodes, alias_map, conv, node.glue)
  File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 1520, in parse_node_list
    node_sql, node_params = self.parse_node(node, alias_map, conv)
  File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 1503, in parse_node
    sql, params, unknown = self._parse(node, alias_map, conv)
  File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 1478, in _parse
    sql, params = self._parse_map[node_type](node, alias_map, conv)
  File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 1394, in _parse_param
    params = [node.conv(node.value)]
  File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 1187, in db_value
    return self.to_field.db_value(value)
  File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 787, in db_value
    return value if value is None else self.coerce(value)
ValueError: invalid literal for int() with base 10: 'N3xLsL8kCGyBk7y5yXHzH5oMo1wvf5xrhJDakIHf'

I think it is something related to event recording:

Task redash.tasks.record_event[e51bc501-4600-4174-960d-10783b7d13ae] raised unexpected: ValueError("invalid literal for int() with base 10: 'N3xLsL8kCGyBk7y5yXHzH5oMo1wvf5xrhJDakIHf'",)

I also think it is strange to get so many status=500 entries on the calls reported in the log.

I tried to reproduce the steps on the demo instance and it works just fine.

Steps to Reproduce

  1. Create a dashboard
  2. Create the public link by clicking on 'enable/disable share' and checking the 'allow public access' checkbox

Technical details:

  • Redash Version: 0.11.0
  • Browser/OS: Chrome
  • How did you install Redash: docker-compose
@WesleyBatista
Contributor Author

The result of the celery command bin/run celery --app=redash.worker result e51bc501-4600-4174-960d-10783b7d13ae:

root@c125c0578a12:/opt/redash/current# bin/run celery --app=redash.worker result e51bc501-4600-4174-960d-10783b7d13ae
sed: can't read .env: No such file or directory
[2016-08-17 00:11:37,266][PID:18281][INFO][root] Latest version: 0.11.1 (newer: True)
Traceback (most recent call last):
  File "/usr/local/bin/celery", line 11, in <module>
    sys.exit(main())
  File "/usr/local/lib/python2.7/dist-packages/celery/__main__.py", line 30, in main
    main()
  File "/usr/local/lib/python2.7/dist-packages/celery/bin/celery.py", line 81, in main
    cmd.execute_from_commandline(argv)
  File "/usr/local/lib/python2.7/dist-packages/celery/bin/celery.py", line 769, in execute_from_commandline
    super(CeleryCommand, self).execute_from_commandline(argv)))
  File "/usr/local/lib/python2.7/dist-packages/celery/bin/base.py", line 306, in execute_from_commandline
    return self.handle_argv(self.prog_name, argv[1:])
  File "/usr/local/lib/python2.7/dist-packages/celery/bin/celery.py", line 761, in handle_argv
    return self.execute(command, argv)
  File "/usr/local/lib/python2.7/dist-packages/celery/bin/celery.py", line 693, in execute
    ).run_from_argv(self.prog_name, argv[1:], command=argv[0])
  File "/usr/local/lib/python2.7/dist-packages/celery/bin/base.py", line 310, in run_from_argv
    sys.argv if argv is None else argv, command)
  File "/usr/local/lib/python2.7/dist-packages/celery/bin/base.py", line 372, in handle_argv
    return self(*args, **options)
  File "/usr/local/lib/python2.7/dist-packages/celery/bin/base.py", line 269, in __call__
    ret = self.run(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/celery/bin/celery.py", line 259, in run
    value = result.get()
  File "/usr/local/lib/python2.7/dist-packages/celery/result.py", line 169, in get
    no_ack=no_ack,
  File "/usr/local/lib/python2.7/dist-packages/celery/backends/base.py", line 215, in wait_for
    raise result
ValueError: invalid literal for int() with base 10: 'N3xLsL8kCGyBk7y5yXHzH5oMo1wvf5xrhJDakIHf'

@arikfr
Member

arikfr commented Aug 17, 2016

There are 3 issues here:

  1. The logs reporting an error status code (500) for these requests: this is a bug. All of these requests are OK. It's something with how gunicorn closes the connections. I tried debugging it in the past but didn't reach anything conclusive. I plan to rewrite how we do the logging to avoid these bogus errors.
  2. The issue with writing the event: I saw this in the logs occasionally but never managed to understand where it's coming from. Now that I know it comes from shared dashboards it will be easier to find what's up. Anyway, this is not preventing the dashboard from loading.
  3. The dashboard not loading. Can it be this issue: "Shared dashboard crashes if one of the queries has no results yet" (#1194)?

@WesleyBatista
Contributor Author

I haven't done anything and now I am getting:

Internal Server Error

The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.

And I don't think it is related to #1194. All queries already have results.

I will try to investigate more in depth.

@WesleyBatista
Contributor Author

Actually, this Internal Server Error is because I used the old link (I generated a new one for the dashboard while I was testing it).

I think that here we should respond with a 410 or 404 error...
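Something like this minimal Flask sketch illustrates the idea (the token lookup here is a stand-in, not Redash's actual handler): if the token no longer resolves to a dashboard, answer 404 (or 410) instead of letting the failed lookup surface as a 500.

from flask import Flask, abort

app = Flask(__name__)

# Stand-in for the real token -> dashboard lookup; assume a revoked/regenerated
# token is simply no longer present.
SHARED_DASHBOARDS = {'current-token': 'example dashboard'}

@app.route('/public/dashboards/<token>')
def public_dashboard(token):
    dashboard = SHARED_DASHBOARDS.get(token)
    if dashboard is None:
        abort(404)  # or 410 Gone, to signal that the link was revoked
    return dashboard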

@WesleyBatista
Contributor Author

WesleyBatista commented Aug 18, 2016

I have just arrived at these lines:
https://github.com/getredash/redash/blob/v0.11.1.b2095/redash/metrics/request.py#L37-L42

Locally, I added a line inside calculate_metrics_on_exception to see what's happening:

def calculate_metrics_on_exception(error):
    metrics_logger.error(error)
    if error is not None:
        calculate_metrics(MockResponse(500, '?', -1))

I got this:

[2016-08-18 17:12:25,789][PID:24][ERROR][metrics] [Errno 11] Resource temporarily unavailable
[2016-08-18 17:12:25,791][PID:24][INFO][metrics] method=POST path=/api/events endpoint=events status=501 content_type=? content_length=-1 duration=205.31 query_count=8 query_duration=56.99
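The [Errno 11] Resource temporarily unavailable suggests the exception hook is being handed a transient socket error (EAGAIN) from the connection teardown even though the response itself succeeded, which would explain the bogus error metric lines. A sketch of one way to filter that out, reusing the names from the snippet above (the filtering rule is just an idea, not an actual Redash fix):

import errno

def calculate_metrics_on_exception(error):
    # EAGAIN and friends come from the connection being closed under gunicorn,
    # not from the application, so don't report a synthetic 500 for them.
    if isinstance(error, EnvironmentError) and error.errno == errno.EAGAIN:
        metrics_logger.debug('ignoring transient socket error: %s', error)
        return
    if error is not None:
        calculate_metrics(MockResponse(500, '?', -1))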

@WesleyBatista
Contributor Author

I think the Errno 11 message above is more related to issue 1 (the bogus 500 logs).

Regarding the main issue, i.e. the widgets not displaying on public dashboards, my guess is:
Somewhere the code is expecting an id (which is an int), but something is passing the dashboard token instead of the id.

WDYT @arikfr ?
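Just to make the guess concrete: the traceback above shows the value going through a ForeignKeyField (self.to_field.db_value(value) in peewee), and the target field coerces with int() here, so the error reproduces in one line:

# illustration only: an integer foreign key column coerces its value with int(),
# so passing the public dashboard token where a numeric user id is expected
# raises exactly the ValueError seen in the log.
int('N3xLsL8kCGyBk7y5yXHzH5oMo1wvf5xrhJDakIHf')
# ValueError: invalid literal for int() with base 10: 'N3xLsL8kCGyBk7y5yXHzH5oMo1wvf5xrhJDakIHf'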

@WesleyBatista
Contributor Author

I tried to add logging to the redash.tasks.record_event task, and here is what I got:

[2016-08-18 18:06:33,047][PID:60][INFO][root] {'user_id': u'N3xLsL8kCGyBk7y5yXHzH5oMo1wvf5xrhJDakIHf', 'ip': 'xxx.xx.xxx.xxx', 'object_type': 'dashboard', 'org_id': 1, 'object_id': 369, 'headless': False, 'referer': None, 'user_agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36', 'timestamp': 1471543593, 'action': 'view', 'public': True}
[2016-08-18 18:06:33,062: ERROR/MainProcess] Task redash.tasks.record_event[0990d12f-7346-40f1-99a9-5d434be782b3] raised unexpected: ValueError("invalid literal for int() with base 10: 'N3xLsL8kCGyBk7y5yXHzH5oMo1wvf5xrhJDakIHf'",)

@arikfr
Member

arikfr commented Aug 18, 2016

Did you manage to understand why the dashboard doesn't load? Or is it loading now?

If it's not, can you check api_error.log and the browser console when loading the dashboard?

@arikfr
Member

arikfr commented Aug 18, 2016

About the record_event issue: I have a feeling the problem is with this check. The user object is Flask's proxy object and might be returning False there instead of True.

It's just a feeling, as I haven't had the chance to debug it yet.
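If it helps: current_user is a werkzeug.local.LocalProxy, so one way to take the proxy out of the equation before the isinstance() checks is to unwrap it explicitly (a small sketch; whether the proxy really is the culprit is exactly what still needs debugging):

from werkzeug.local import LocalProxy

def unwrap_user(user):
    # _get_current_object() returns the object the proxy currently points at,
    # so type checks run against the real ApiUser/ApiKey and not the wrapper.
    if isinstance(user, LocalProxy):
        return user._get_current_object()
    return user

Calling something like unwrap_user(current_user) before the existing check would at least rule the proxy out as the reason the isinstance test fails.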

@arikfr
Member

arikfr commented Aug 18, 2016

Metrics & Errno 11 -- I remember seeing this too, but I think later I noticed there were different errors in my test environment and production. But not sure.

@WesleyBatista
Contributor Author

Yesterday I ended up with the following:

  1. we have a misunderstanding somewhere with ApiKey/ApiUser;
  2. record_event does not seem to be related to the main issue.

Somehow current_user is being set to an ApiKey instance, while record_event expects an ApiUser [redash/handlers/base.py].

I added some logging lines to that method:

def record_event(org, user, options):

    logging.info('user')
    logging.info(user)
    logging.info('options')
    logging.info(options)
    logging.info('instance of ApiKey : {}'.format(isinstance(user, ApiKey)))
    logging.info('instance of user is: {}'.format(type(user)))

    if isinstance(user, ApiUser):
        options.update({
            'api_key': user.name,
            'org_id': org.id
        })

    #[...]

the logs:

[2016-08-19 13:55:12,276][PID:59][INFO][root] user
[2016-08-19 13:55:12,276][PID:59][INFO][root] <ApiKey: 10>
[2016-08-19 13:55:12,276][PID:59][INFO][root] options
[2016-08-19 13:55:12,276][PID:59][INFO][root] {'object_type': 'dashboard', 'object_id': 369, 'headless': False, 'referer': None, 'action': 'view', 'public': True}
[2016-08-19 13:55:12,276][PID:59][INFO][root] instance of ApiKey : False
[2016-08-19 13:55:12,276][PID:59][INFO][root] instance of user is: <class 'werkzeug.local.LocalProxy'>

After seeing that we were passing an ApiKey, I got confused by the last lines.
They were telling me that <ApiKey: 10> is not an instance of ApiKey; it was actually <class 'werkzeug.local.LocalProxy'>, the Flask proxy object you mentioned.
So I added the following to test:

    elif isinstance(user, ApiKey):
        options.update({
            'api_key': user.api_key,
            'org_id': org.id
        })
    elif isinstance(user.id, unicode):
        options.update({
            'api_key': user.id,
            'org_id': org.id
        })

The user object does not enter the first elif; that's why I added the second elif.

With a log line at the end of the method I could see that it worked.
It is no longer trying to write a 'user_id'.

[2016-08-19 13:55:12,277][PID:59][INFO][root] {'ip': 'xxxxxxx', 'object_type': 'dashboard', 'org_id': 1, 'object_id': 369, 'headless': False, 'referer': None, 'user_agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36', 'timestamp': 1471614912, 'action': 'view', 'api_key': u'N3xLsL8kCGyBk7y5yXHzH5oMo1wvf5xrhJDakIHf', 'public': True}

The final code:

def record_event(org, user, options):
    logging.info('user')
    logging.info(user)
    logging.info('options')
    logging.info(options)
    logging.info('instance of ApiKey : {}'.format(isinstance(user, ApiKey)))
    logging.info('instance of user is: {}'.format(type(user)))
    if isinstance(user, ApiUser):
        options.update({
            'api_key': user.name,
            'org_id': org.id
        })
    elif isinstance(user, ApiKey):
        options.update({
            'api_key': user.api_key,
            'org_id': org.id
        })
    elif isinstance(user.id, unicode):
        options.update({
            'api_key': user.id,
            'org_id': org.id
        })
    else:
        options.update({
            'user_id': user.id,
            'org_id': org.id
        })

    options.update({
        'user_agent': request.user_agent.string,
        'ip': request.remote_addr
    })

    if 'timestamp' not in options:
        options['timestamp'] = int(time.time())

    logging.info(options)

    record_event_task.delay(options)

The traceback is not raised anymore. No more issues with record_event so far.
But I am still not able to see any widgets in the dashboard :(

@arikfr
Member

arikfr commented Aug 19, 2016

But I am still not able to see any widgets in the dashboard :(

Of course -- the issue with record_event was never preventing the dashboard from loading. Did you check api_error.log and the browser console when loading the dashboard?

@WesleyBatista
Contributor Author

The browser console is raising an error:
(screenshot of the browser console error, taken 2016-08-19 12:38)

I had already seen that, but started investigating on the server side (don't ask me why 😑).

I will take a look after lunch.

@WesleyBatista
Contributor Author

I use Docker. To see the logs I run docker logs --since '2016-08-19' --tail=all -f redash.
Where is api_error.log located? Before using Docker the logs were located at /opt/redash/logs.
There are no files there anymore.

@arikfr
Member

arikfr commented Aug 19, 2016

When using Docker the logs are not saved to files but piped to stdout (and that's what you see with docker logs).

Are any of the queries on this dashboard using parameters?


@WesleyBatista
Contributor Author

In fact, they are.


@WesleyBatista
Contributor Author

Actually, I'm not sure right now. I do know that we are using filters or multi-filters.


@WesleyBatista
Contributor Author

I give up on trying to understand... I have updated to the latest version and it is fixed.
I think we can close this issue.

@arikfr
Member

arikfr commented Aug 22, 2016

I'm glad it works for you now, although I wonder what fixed it, as I don't remember fixing anything related... :-)

Anyway, closing.
