
Getting started with local Nano LLMs
In this presentation, we’ll walk through how to get started running large language models locally. We’ll cover setting up guardrails, creating model files, and using tools to prune and trim models for targeted use cases. We’ll also compare configurations across hardware back ends, including NVIDIA CUDA and Apple Metal (MPS). To wrap up, we’ll discuss future directions for locally run LLMs and share a project where we’re exploring how to integrate them into our learning management system.
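As one concrete example of a model file, here is a minimal sketch assuming the Ollama toolchain, which packages a base model together with parameters and a system prompt in a Modelfile (the base model tag and system prompt below are illustrative choices, not part of the talk):

```
# Hypothetical Ollama Modelfile: a small base model tuned for a targeted use case
FROM llama3.2:1b

# Sampling parameters for more predictable answers
PARAMETER temperature 0.7

# A system prompt scoping the model to one task
SYSTEM """You are a concise assistant that answers course-related questions."""
```

With Ollama installed, a file like this would be built and run with `ollama create course-helper -f Modelfile` followed by `ollama run course-helper` (the model name `course-helper` is made up for this sketch).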
NITIC Webinar Policy Update: AI Notetaking Tools Not Permitted
Presenter(s): Kyle Jones