
Add MechanicalTurkCritiqueClient with CSV import / export #1539

Merged: 7 commits merged into main on May 10, 2023

Conversation

@yifanmai (Collaborator) commented May 8, 2023

Addresses #1330

@yifanmai yifanmai requested review from percyliang and teetone May 8, 2023 17:58
@teetone (Member) left a comment:

Thanks, @yifanmai! Overall, the implementation makes sense. I just have a few concerns:

  1. There seems to be a bug with newlines when exporting the data CSV file (see PR comment for an example).
  2. The files seem to be output to mturk/<metric name>. I think they would get overwritten when running multiple scenarios or models that use the same metrics.
  3. Do we specify description and reward in HELM or the MTurk UI when we create a project?
  4. Do we need the Mongo cache to keep track of which scenarios/models we export files for and have results for?

    self.critique_client = SurgeAICritiqueClient(surgeai_credentials, self._build_cache_config("surgeai"))
else:
    raise ValueError(
        "CritiqueClient is not configured; set critiqueType to 'mturk', 'mturk-sandbox', 'surgeai' or 'random'"
@teetone (Member):

I think mturk-sandbox is not supported unless we change the above to "mturk" in critique_type.

@yifanmai (Collaborator, Author):

Removed mturk-sandbox. You can still use the sandbox with this integration though.

if critique_type == "random":
    self.critique_client = RandomCritiqueClient()
elif critique_type == "mturk":
    self.critique_client = MechanicalTurkCritiqueClient()
@teetone (Member):

Does the MechanicalTurk client require a cache to keep track of HITs and results that were imported?

@yifanmai (Collaborator, Author):

We don't use the cache. When we import, we load the request results into memory instead of loading them into the cache. This is mostly because our current Cache abstraction does not support modifying existing entries, and the results change over time (e.g. if we request 2 respondents, the result may initially contain only 1 response, and later contain 2 once the second respondent has responded). Note that this means your memory has to be large enough to hold all of the request / request-result pairs.
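
As an illustration of that in-memory approach (a rough sketch, not the PR's actual code; the CSV column name and helper names below are assumptions), a re-import simply overwrites the earlier, partial entry for the same request:

# Sketch only: keep imported MTurk results in a plain dict rather than the Cache,
# so a later re-import can replace a partial result (e.g. 1 of 2 respondents)
# with a more complete one.
import csv
from typing import Dict

# Hypothetical stand-in for a mapping of CritiqueRequest -> CritiqueRequestResult.
_hit_id_to_result: Dict[str, Dict[str, str]] = {}

def import_results(csv_path: str) -> None:
    """Load (or reload) worker responses from an exported results CSV into memory."""
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            # Overwrite any earlier, partial result for the same HIT.
            _hit_id_to_result[row["HITId"]] = dict(row)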

    # Note: Instructions can contain HTML.
    instructions="Rate the following summary.\n\nOriginal text:\n{{original_text}}\n\nSummary: {{summary}}",
    description="Rate summaries generated by a large language model",
    reward="1.00",
@teetone (Member):

Is the dollar amount pay for a single instance/summary?

@yifanmai (Collaborator, Author):

Removed (not used any more).

@@ -23,30 +23,34 @@ class SummarizationCritiqueMetric(Metric):

     def __init__(self, num_respondents: int) -> None:
         self._template = CritiqueTaskTemplate(
-            name="HELM Summarization Evaluation",
+            name="summarization_critique",
@teetone (Member):

Do we need to include the full run spec and maybe even the model name in name? Otherwise, would the output template and data files get overwritten if we're evaluating multiple models or even scenarios?

@yifanmai (Collaborator, Author):

A single execution of helm-run with multiple models and run specs will result in only a single output layout and CSV file. This is because we maintain a singleton exporter per template.

It was previously true that multiple invocations of helm-run would result in the files getting overwritten. I updated the PR to fix this by appending a unique(-ish) timestamp to the generated file name.
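
For illustration, the timestamped-filename idea could look roughly like this (a sketch under assumed names and path layout; the PR's actual code may differ):

from datetime import datetime
from pathlib import Path

def exported_csv_path(base_dir: str, template_name: str) -> Path:
    # Append a run timestamp so repeated helm-run invocations don't overwrite each other.
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    return Path(base_dir) / f"{template_name}_{timestamp}.csv"

# e.g. exported_csv_path("mturk", "summarization_critique")
#      -> mturk/summarization_critique_20230510_073000.csv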

"""Convert from Surge AI template tag format to Mechanical Turk template tag format.

{{field}} -> ${field}"""
return re.sub(r"{{([^{}]+)}}", "${\\1}", raw_text)
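
For reference, a self-contained version of that conversion (the wrapper function name below is assumed; only the regex comes from the PR) behaves like this:

import re

def convert_template_tags(raw_text: str) -> str:
    """Convert Surge AI-style {{field}} tags to Mechanical Turk-style ${field} tags."""
    return re.sub(r"{{([^{}]+)}}", "${\\1}", raw_text)

print(convert_template_tags("Original text:\n{{original_text}}\n\nSummary: {{summary}}"))
# Original text:
# ${original_text}
#
# Summary: ${summary}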
@teetone (Member):

I tested it out with summarization_cnndm:temperature=0.3,device=cpu,model=openai_ada on just 5 instances, and currently the output CSV file looks like this:

[Screenshot: exported CSV file]

I think maybe newlines are not handled well?

@yifanmai (Collaborator, Author):

The formatting is correct; here's what it looks like after importing into Mechanical Turk. (I added more newlines for illustration.)

Note that newlines in raw HTML are not displayed to the end user (this is just how HTML works). You will have to use <p></p>, <br> or style="white-space: pre-wrap;" if you want to render newlines.

[Screenshot: the HIT rendered in Mechanical Turk]
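
If you do want the newlines to show up, one option (a sketch for illustration, not part of this PR) is to wrap the instruction text before exporting the layout HTML:

def wrap_preserving_newlines(instructions: str) -> str:
    # white-space: pre-wrap makes the browser render \n characters as line breaks.
    return f'<div style="white-space: pre-wrap;">{instructions}</div>'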

@teetone (Member):

I see, that makes sense. Thanks!

@@ -80,3 +100,9 @@ class CritiqueRequestResult:

    responses: List[CritiqueResponse]
    """List of respondents' responses."""


def populate_template_with_fields(template: str, fields: Dict[str, str]):
@teetone (Member):

Is this return type str?

@yifanmai (Collaborator, Author):

Removed (not used any more).


    Only used by Mechanical Turk."""

    reward: str
@teetone (Member):

If we specify description and reward here, how do we also specify worker requirements? Are these fields being used?

@yifanmai (Collaborator, Author):

Removed (not used any more).

The worker requirements are specified in the web UI.

@teetone teetone self-requested a review May 10, 2023 07:24
@teetone teetone merged commit a803874 into main May 10, 2023
@teetone teetone deleted the yifanmai/1330-mturk-csv branch May 10, 2023 07:54