# Usage

## Run Evaluation

After the cloud has been registered successfully, Cloudiator collects the cloud resource offerings of the cloud provider. Depending on the number of offerings, this may take a few minutes, so time for a coffee ;-)

Mowgli composes the cloud resource offerings into the VM templates required by each evaluation scenario. You can then query the Mowgli framework for appropriate VM templates as described [here](Get-VM-Templates.md).

### Start evaluation

After retrieving the VM templates, you are ready to start the evaluations :cloud: :hourglass: :trophy:

Mowgli supports four types of evaluation scenarios:

- [Performance](Performance-Evaluation.md)
- [Scalability](Scalability-Evaluation.md)
- [Elasticity](Elasticity-Evaluation.md)
- [Availability](Availability-Evaluation.md)

Please check the respective scenario pages for further details on the execution, the supported DBMS, and the workloads.

## Evaluation Results

All evaluation results are stored on the file system of the host that runs the Mowgli framework, in the following structure:

```
opt
|_evaluation-results
  |_SCENARIO
    |_CLOUD
      |_DBMS
        |_CONFIG
          |_RUN_X
            |_data        # contains raw evaluation data
            |_monitoring  # contains system usage plots
            |_specs       # contains the applied templates
            |_taskLogs    # additional logs
            |_timeseries  # throughput plot of the evaluation run
          |_plots          # contains aggregated evaluation data over all runs (manual processing steps required)
```

The `data` folder contains the raw evaluation results of the load phase (`load.txt`) and of the CRUD phase (called the transaction phase in the YCSB context). By default, the throughput and latency plots are generated under the `timeseries` folder. In addition, system metric plots of the Workload-API instances and the DBMS nodes are available under the `monitoring` folder.
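The directory layout above can be traversed programmatically when you want to post-process all runs of a scenario. A minimal sketch, assuming the results root is `/opt/evaluation-results` as shown in the tree; the scenario, cloud, DBMS, and config names below are hypothetical placeholders used only to build a mock tree:

```python
import tempfile
from pathlib import Path

def collect_runs(base: Path, scenario: str):
    """Collect the `data` directories of all runs of one scenario,
    following the documented layout:
    <base>/<SCENARIO>/<CLOUD>/<DBMS>/<CONFIG>/<RUN_X>/data
    """
    return sorted(base.joinpath(scenario).glob("*/*/*/RUN_*/data"))

# Build a mock results tree for demonstration (all names hypothetical);
# on a real host you would point `base` at /opt/evaluation-results.
base = Path(tempfile.mkdtemp()) / "evaluation-results"
for run in ("RUN_1", "RUN_2"):
    (base / "performance/openstack/cassandra/default" / run / "data").mkdir(parents=True)

runs = collect_runs(base, "performance")
print([p.parent.name for p in runs])  # → ['RUN_1', 'RUN_2']
```

Globbing with a fixed number of `*` levels keeps the lookup tied to the documented depth, so stray files at other levels are ignored.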
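Since the raw results in the `data` folder come from YCSB, the headline metrics can be extracted with a small parser. A sketch assuming the standard YCSB summary-line format (`[SECTION], Metric, Value`); the sample text is illustrative, not actual output from a Mowgli run:

```python
def parse_ycsb_summary(text: str) -> dict:
    """Parse YCSB summary lines of the form '[SECTION], Metric, Value'
    into a nested dict: {section: {metric: value}}."""
    results: dict = {}
    for line in text.splitlines():
        parts = [p.strip() for p in line.split(",")]
        if len(parts) != 3 or not parts[0].startswith("["):
            continue  # skip malformed or non-summary lines
        section = parts[0].strip("[]")
        try:
            value = float(parts[2])
        except ValueError:
            continue  # skip non-numeric values
        results.setdefault(section, {})[parts[1]] = value
    return results

# Illustrative sample of a YCSB load-phase summary (hypothetical numbers).
sample = """\
[OVERALL], RunTime(ms), 10110
[OVERALL], Throughput(ops/sec), 989.119
[INSERT], Operations, 10000
[INSERT], AverageLatency(us), 860.2
"""
metrics = parse_ycsb_summary(sample)
print(metrics["OVERALL"]["Throughput(ops/sec)"])  # → 989.119
```

In practice you would read the file content with `Path(run_dir / "load.txt").read_text()` and feed it to the parser, then aggregate across the `RUN_X` directories.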