# Week 7 - Evaluation of LLM outputs and structured outputs

### today's schedule

* Quiz
* Short Recap
* Short presentation of current status
* Breakout Session ‘Next Steps’
* Homework for next week

### resources

{% file src="https://4020123021-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-MHobCAnoTQkN71lOgdv%2Fuploads%2F7BJGAohBTstrr4nquq1E%2F231212_Evaluation%20of%20LLM%20outputs%20and%20structured%20output.pptx%20(1).pdf?alt=media&token=b2cbfc52-1356-43dc-880a-ec4f529dac62" %}

### homework

* Take a look at this article: <https://blog.n8n.io/open-source-llm/>
* Take a look at some open-source / open-access LLM frameworks:
  * Ollama (Mac, Linux):
    * <https://www.youtube.com/watch?v=Ox8hhpgrUi0>
    * <https://www.youtube.com/watch?v=k_1pOF1mj8k>
  * LM Studio (Windows, Mac, Linux):
    * <https://medium.com/@genebernardin/running-llms-locally-using-lm-studio-38070f286413>
  * GPT4All
* Test some open-source/open-access LLMs, either downloaded locally or in Hugging Face Spaces or any other test environment (e.g. HuggingChat, H2O.ai, etc.)
  * Do you see significant differences between the output of proprietary and open-source/open-access LLMs?
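If you go the local route with Ollama, one convenient way to compare models side by side is to query its local REST API (`http://localhost:11434/api/generate`) from a short script. The sketch below assumes an Ollama server is already running on its default port and that the named models (`llama2`, `mistral` here, purely as examples) have been pulled with `ollama pull`; swap in whichever models you actually test.

```python
# Minimal sketch: ask several locally served Ollama models the same prompt
# and print their answers for side-by-side comparison.
# Assumes: Ollama running on the default port, models already pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> bytes:
    # "stream": False asks Ollama for a single JSON object
    # instead of a stream of partial responses.
    return json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode("utf-8")

def ask(model: str, prompt: str) -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The completed text is returned in the "response" field.
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    prompt = "Explain overfitting in one sentence."
    for model in ("llama2", "mistral"):  # example model names
        print(f"--- {model} ---")
        print(ask(model, prompt))
```

Running the same prompt through each model this way makes it easy to collect outputs for the homework question above and compare them against a proprietary model's answer to the same prompt.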
