Evaluation Comparison Report

This report provides a framework for comparing and evaluating Amazon SageMaker Experiments and Weights & Biases (W&B).


Hypothesis

Add details here about your ML project and the metrics you want to log with both W&B and SageMaker Experiments.

Experimentation

  • Data generation

    • Describe how the data was generated. Versioning each generated dataset with W&B Artifacts is one opportunity here (see the Artifacts sketch after this list).
  • Data comparison

    • Describe how the dataset versions were compared. Artifacts and Tables can surface the differences between versions (see the Tables sketch after this list).
  • Experiment Tracking

    • Experience with W&B (a minimal tracking sketch appears after this list)
    • Experience with SageMaker (a minimal SageMaker Experiments sketch appears after this list)
    • Miscellaneous observations
  • Tuning your Hyperparameters

    • [Include a parallel coordinates plot from a sweep here! The sweep sketch after this list shows one way to launch such a sweep.]
  • Sharing your findings

    • Consider publishing the results as a collaborative report.
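
Below is a minimal sketch of dataset versioning with W&B Artifacts, assuming a Python workflow with pandas installed. The project name, artifact name, and file schema are placeholders; substitute your own.

```python
import pandas as pd
import wandb

# Placeholder project and job type; rename to match your setup.
run = wandb.init(project="sagemaker-vs-wandb", job_type="data-generation")

# Stand-in for your real data generation step.
pd.DataFrame({"x": range(100), "y": range(100)}).to_csv("train.csv", index=False)

# Log the generated file as a versioned dataset artifact.
artifact = wandb.Artifact(name="training-data", type="dataset")
artifact.add_file("train.csv")
run.log_artifact(artifact)
run.finish()
```

Re-running this after the data changes produces a new artifact version (v1, v2, ...), which is what makes the comparison step below possible.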
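
For data comparison, a sketch using W&B Tables; the two DataFrames below are hypothetical stand-ins for the dataset versions you want to diff.

```python
import pandas as pd
import wandb

run = wandb.init(project="sagemaker-vs-wandb", job_type="data-comparison")

# Hypothetical stand-ins for two versions of the same dataset.
v1 = pd.DataFrame({"x": range(5), "y": [0, 1, 4, 9, 16]})
v2 = pd.DataFrame({"x": range(5), "y": [0, 1, 4, 9, 25]})

# Log each version as a Table so the rows can be inspected
# side by side in the W&B UI.
run.log({
    "train_v1": wandb.Table(dataframe=v1),
    "train_v2": wandb.Table(dataframe=v2),
})
run.finish()
```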
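
For the experiment-tracking comparison itself, a minimal W&B run might look like the following; the config values and the loss curve are placeholders for your real training loop.

```python
import wandb

# Placeholder hyperparameters; log yours via the config argument.
config = {"learning_rate": 0.001, "epochs": 5}
run = wandb.init(project="sagemaker-vs-wandb", config=config)

for epoch in range(config["epochs"]):
    loss = 1.0 / (epoch + 1)  # stand-in for a real training loss
    run.log({"epoch": epoch, "loss": loss})

run.finish()
```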
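
The rough SageMaker Experiments equivalent, assuming sagemaker >= 2.123 (where the Run API was introduced) and AWS credentials plus a default region already configured:

```python
from sagemaker.experiments.run import Run

# Experiment and run names are placeholders; rename to match your project.
with Run(experiment_name="sagemaker-vs-wandb", run_name="baseline") as run:
    run.log_parameter("learning_rate", 0.001)
    for epoch in range(5):
        loss = 1.0 / (epoch + 1)  # stand-in for a real training loss
        run.log_metric(name="loss", value=loss, step=epoch)
```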
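
Finally, one way to produce the parallel coordinates plot mentioned above is to run a small sweep; W&B builds that panel automatically from the sweep's runs. Everything below (search space, project name, toy objective) is a hypothetical example.

```python
import wandb

# Hypothetical search space for a random-search sweep.
sweep_config = {
    "method": "random",
    "metric": {"name": "loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 0.0001, "max": 0.1},
        "epochs": {"values": [3, 5, 10]},
    },
}

def train():
    run = wandb.init()
    cfg = wandb.config
    for epoch in range(cfg.epochs):
        # Toy objective standing in for a real training loop.
        loss = cfg.learning_rate / (epoch + 1)
        run.log({"loss": loss})
    run.finish()

sweep_id = wandb.sweep(sweep_config, project="sagemaker-vs-wandb")
wandb.agent(sweep_id, function=train, count=10)
```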

Analysis

  • Record the assumptions you made and the observations you gathered during this project.

Conclusion & Considerations

  • What did you find easier/better about W&B for your experiment?
  • What did you find harder/worse about W&B for your experiment?
  • What did you find easier/better about SageMaker Experiments for your experiment?
  • What did you find harder/worse about SageMaker Experiments for your experiment?
  • How did W&B impact or help your experiment? (Try to cite concrete numbers: experiments launched, hours saved, etc.)
  • How did SageMaker Experiments impact or help your experiment? (Try to cite concrete numbers: experiments launched, hours saved, etc.)