
Summer 2026 Working Connections III – Online
Week 3: Online


Registration is now open! 

Details for Online Summer Working Connections are now available. Check out the track options, program policies, and schedule prior to submitting your registration request.

Program Policies

The goal of the National IT Innovation Center’s (NITIC) Working Connections professional development is to equip IT faculty at two-year institutions of higher education with the expertise needed to teach their track content in a subsequent semester. This ensures that the most current information reaches their classrooms, either as a stand-alone course or as supplemental material to an existing course. 

Cost:  

  • Tuition is FREE; there is no fee to attend. 

Eligibility:  

  • Working Connections is for faculty and administrators currently teaching IT credit courses (full-time or adjunct) at a regionally accredited U.S. two-year community college or technical college.   
  • To ensure equitable access to new learning opportunities, participants may not enroll in the same track more than once. Tracks that repeat previously offered content will be clearly noted, and individuals who have already completed the course are not eligible to retake it. 
  • Attendees are expected to use what they learn in their track to teach or supervise a class in the next 12 months. 
  • High school teachers may only attend if they also teach as a community college adjunct. 
  • Seats will be limited to 2 per institution. Additional faculty will be placed on a waitlist and will receive a seat if space becomes available.

Registration:  

  • Completing the registration form requests your seat. Your seat is not confirmed until you receive a registration confirmation email from NITIC.  
  • Each individual may only submit one application for registration. Only the first submission will be considered, and any subsequent registrations will be disregarded without further notice.  
  • IT Innovation Network (ITIN) member institutions will have a priority window to register and will be notified of the dates via the NITIC mailing list.  

Attendance Requirements: 

  • This is a synchronous online workshop.
  • Instructors track attendance and participation. Participants are expected to attend and actively engage in all scheduled sessions. Attendance means contributing to discussions, completing in-class activities, and being present for live instruction, not just logging in. 
  • Participants must attend at least 75% of the total instructional time to meet program standards; missing more than 25% of total class time will disqualify you from earning the Credly badge. 
  • If you anticipate any absence, notify your instructor and NITIC in advance. If your absence is unexpected, please notify your instructor and NITIC as soon as you are able.  
  • Instructors are not required to provide make-up work or spend time outside of scheduled sessions helping participants catch up if time is missed. Any make-up work is at the instructor’s discretion, and completion of the work does not override the 25% limit. 

Cancellation/Track Changes: 

  • If you must cancel your registration or request a track change, please notify Mark Dempsey at mdempsey@collin.edu as soon as possible, and no later than the deadline. 
  • To be good stewards of our NSF ATE grant funding, we must fill all available seats. Attendees who register but then fail to show up without providing advance notice may be ineligible for future Working Connections workshops. Please inform us right away if you’re not able to attend. 

Tracks:  

  • Tracks run for the entire duration of the Working Connections session; attendees may select only one track.  
  • Some tracks have specific pre-requisites or requirements. Be sure to read the track details before requesting to register.  
  • Tracks may be repeated throughout the year. See the track details to ensure you’re not registering for a track you’ve already completed.  
  • Seating capacity varies by lab, track, and instructor, but is typically capped at 20 attendees.
  • Webcam and dual monitors are highly recommended; tracks often require reading instructions on one screen while working on the project on another. 
  • Recording and the use of AI notetaking assistants during online tracks are at the sole discretion of the instructor. NITIC does not facilitate, store, or manage recordings or AI transcriptions.
  • Be sure to check for time zone differences. You are responsible for ensuring you do not miss your track.

Completion Credential:  

  • NITIC has teamed up with Credly to provide digital badges to showcase verified Working Connection credentials.  
  • Only those who attend 75% or more of the course AND pass the required track assessment with a grade of 80% or better will receive their badge.  
  • Badges will be issued within 30 days of completion and can be showcased on LinkedIn, email signatures, or printed as a certificate. Hard copies can be printed from Credly’s website and will reflect CEUs earned. 

Survey:  

  • All attendees will complete a survey before the end of the event. 
  • Longitudinal surveys will continue to be sent after the event to measure lasting impact.  

 

AI for Educators (INTRO)

July 20-24 from 9:30am-5:30pm CT; 10:30am-6:30pm ET  

 

Description 

This online workshop is designed for instructors who want to integrate AI into their teaching. It is not a deep technical dive into AI; instead, the focus is on how AI is transforming our work as educators and how we can effectively utilize AI tools to deliver more effective, efficient, and engaging instruction. Participants will explore practical ways AI impacts teaching, student learning, and curriculum design through guided hands-on activities using accessible no-code platforms. We will examine real-world strategies for prompt engineering, AI-assisted course development, student-facing AI activities, ethical classroom policies, and preparing students for an AI-integrated workforce. You’ll leave with immediately usable resources: ready-to-adapt lesson templates, rubrics, feedback tools, sample AI policies, and a personalized AI Integration Action Plan so you can start enhancing your courses and instruction the very next semester. 

NOTE: This track will be a repeat of content provided in “AI for Educators” (Winter Working Connections online, December 2024) and “AI Essentials for Educators” (Summer Working Connections online, July 2025). Participants who previously completed either of these courses are not eligible to register for this track again.

Objectives

  • Explain core AI concepts, capabilities, and limitations relevant to education, including how large language models generate responses and where they fail. 
  • Apply effective prompt engineering techniques and evaluate outputs from multiple AI tools to select the right tool for a given instructional task. 
  • Design AI-integrated course materials, including assignments, rubrics, feedback systems, and classroom AI-use policies, that align with learning objectives and uphold academic integrity. 
  • Develop a personalized AI Integration Action Plan that addresses equity, ethics, and student preparation for an AI-integrated workforce. 
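The prompt-engineering techniques referenced above come down to how a request to an LLM is worded. As a minimal, tool-agnostic sketch (the example prompts and the classification task below are illustrative, not taken from the track materials):

```python
# Zero-shot vs. few-shot prompt patterns (illustrative; any chat-based
# LLM tool accepts prompts shaped like these plain strings).
zero_shot = "Classify this student question as 'conceptual' or 'logistical': {q}"

few_shot = """Classify each student question as 'conceptual' or 'logistical'.
Q: When is the midterm? -> logistical
Q: Why does binary search need a sorted list? -> conceptual
Q: {q} ->"""

question = "Can I submit the lab late?"
prompt = few_shot.format(q=question)
# The few-shot version gives the model worked examples to imitate,
# which typically yields more consistent, gradable output formats.
```

Few-shot prompting tends to produce more predictable output than zero-shot prompting, which is why it features in rubric and feedback workflows.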

Pre-requisites

None. 

Required Textbook

None. 

Suggested/optional Textbook

None. 

At-Home Computer Requirements

Webcam and dual monitors are highly recommended; tracks often require reading instructions on one screen while working on the project on another.

Please note that content is subject to change or modification based on the unique needs of the track participants in attendance.  

Agenda

July 20: AI Foundations 

  • Key concepts in AI 
  • AI’s Impact on education 
  • The current AI landscape 
  • Current limitations and risks 
  • Working session: Train a simple image/sound/pose classifier  

July 21: Algorithms & Tools  

  • Overview of AI Algorithms & applications 
  • How LLMs work 
  • Prompt engineering & Skills 
  • AI applications: chatbots, robotics, vision 
  • Working session: Comparing current AI tools 

July 22: AI-Powered Teaching & Course Design 

  • Course development with AI 
  • Assignment & assessment design with AI 
  • Building effective AI policies 
  • Rubrics and feedback at scale 
  • Working session: Getting a head start on Fall 2026 with AI 

July 23: AI for the Classroom 

  • AI literacy for students 
  • Integrating AI into the curricula 
  • AI-assisted coding 
  • Equity, access, and discussing AI use with students 
  • Preparing students for an AI-integrated workforce  
  • Working session: Design a student-facing AI activity 

July 24: AI in the Future  

  • Working session: Building your AI action plan 
  • Agentic AI in education 
  • AI governance & ethics 
  • Emerging trends 

Instructor

Wade Huber is a residential computer science faculty member at Chandler-Gilbert Community College, where he recently served on the committee that developed CGCC’s Artificial Intelligence bachelor’s degree and currently serves as AI faculty support for an NSF-funded AI Entry Pathways Grant. He brings over 25 years of software engineering experience across the telecom, semiconductor, and medical device industries, alongside many years of teaching math and computer science, first as an adjunct and now as full-time faculty. He holds a B.S. from Trinity University in San Antonio and an M.S. in Computer Science from The University of Texas at Dallas. His current focus is on introducing AI into his CS courses and helping fellow educators integrate AI into their teaching practice. 

Azure AI Fundamentals (AI-900) (INTRO)

July 20-24 from 9:30am-5:30pm CT; 10:30am-6:30pm ET 

 

Description

This five-day track prepares attendees to pass the Microsoft Azure AI Fundamentals (AI-900) certification exam. Participants will build foundational knowledge of AI and machine learning concepts, explore Azure AI services, and gain hands-on experience with tools for computer vision, natural language processing, and generative AI. No prior AI experience is required. All lab work is done through a free Azure account and Microsoft Learn. 

NOTE: This track will be a repeat of content provided in “Azure AI Fundamentals” (Fall Working Connections online, September 2025). Participants who previously completed this course are not eligible to register for this track again.

Objectives

  • Identify core AI and machine learning concepts, including supervised, unsupervised, and reinforcement learning. 
  • Describe the capabilities of Azure AI services, including Azure Machine Learning, Cognitive Services, and Azure OpenAI.
  • Demonstrate use of Azure AI tools for computer vision, natural language processing, and document intelligence through guided labs. 
  • Evaluate responsible AI principles and apply them to real-world AI deployment scenarios.

Certification Prep

Microsoft Azure AI Fundamentals (AI-900) 

Pre-requisites

Basic computer literacy and familiarity with cloud concepts are helpful but not required. No prior AI or programming experience is needed. 

Required Textbook

None. All content is available free through Microsoft Learn (learn.microsoft.com). This class will use the free parts of Azure. Attendees will need access to a .edu email address.  

Suggested/optional Textbook

None.

At-Home Computer Requirements

No special setup is required. Attendees need a modern web browser and a free Microsoft Azure account (azure.microsoft.com/free). All labs run in the Azure portal. Webcam and dual monitors are highly recommended; tracks often require reading instructions on one screen while working on the project on another.

Please note that content is subject to change or modification based on the unique needs of the track participants in attendance.  

Agenda

July 20: Introduction to AI and Azure AI Fundamentals

  • Topics: What is AI, machine learning, and deep learning; the Azure AI landscape; responsible AI principles.
  • Lab: Set up free Azure account, explore the Azure portal, complete the AI-900 learning path introduction on Microsoft Learn.

July 21: Machine Learning Fundamentals

  • Topics: Supervised vs. unsupervised learning, regression, classification, clustering; Azure Machine Learning workspace; automated ML.
  • Lab: Build and deploy a simple regression model using Azure ML Studio (no-code). 

July 22: Computer Vision with Azure AI

  • Topics: Image classification, object detection, facial analysis, optical character recognition.
  • Lab: Use Azure AI Vision and Document Intelligence to analyze images and extract text from documents. 

July 23: Natural Language Processing and Conversational AI

  • Topics: Text analytics, sentiment analysis, language understanding, Azure AI Language, Azure AI Bot Service.
  • Lab: Build a basic question-and-answer bot using Azure AI Language and test it in the portal. 

July 24: Generative AI and Exam Prep

  • Topics: Introduction to generative AI, large language models, Azure OpenAI Service, prompt engineering basics.
  • Lab: Explore Azure OpenAI Studio. Afternoon: Full practice exam review, Q&A, and exam registration guidance. 

Instructors

Doug Hampton is currently an Associate Professor and Chair in the Information Technology Academics department at Sinclair Community College in Dayton, Ohio. He holds a bachelor’s and master’s degree in information technology, as well as a master’s in education. Doug is also pursuing a Doctorate in Instructional Design. In addition to his academic qualifications, he holds multiple certifications from industry leaders such as Microsoft, CompTIA, LPI, AWS, and Cisco. Before his time at Sinclair, Doug began his educational career at a university, serving in a progression of roles from Instructor to Senior Director of Information Technology Academic Programs, spanning over ten years. He then advanced to a position as Program Coordinator at a community college in Kentucky, where he contributed for five years. In addition to his academic roles, Doug has gained practical experience as a Database Administrator and Network/Systems Administrator, furthering his expertise in the field of Information Technology.

Kyle Jones is the Assistant Dean of Technology, Grants, and External Partnerships, Professor, and AI Fellow at Sinclair Community College in Dayton, Ohio. With nearly a decade of experience as Chair of the Information Technology Department, Kyle has led transformative initiatives in computer science, information technology, and cybersecurity education.

He has served as a co-Principal Investigator for many NSF awards, including the National Information Technology Innovation Center (NITIC). His work spans international cybersecurity collaborations, including those with the U.S. Embassy in Israel and Portugal, developing faculty externships that connect educators with industry, and national efforts to modernize cybersecurity and IT/OT education.

At Sinclair, Kyle leads AI education initiatives, including the AI Faculty Fellows program, the development of new cloud AI and business AI curricula, and institution-wide efforts to identify and integrate AI tools. He also co-leads workshops at Sinclair, such as “Artificial Intelligence for Educators,” supported by the NSF and NCyTE, which help faculty adopt AI into teaching and learning.

Kyle also recently presented at the CyAD Conference, focused on cross-disciplinary collaboration between manufacturing, IoT, and cybersecurity. His leadership extends into workforce development, where he partners with industry to address talent needs in IT, cloud, and data center technologies.

Beyond his administrative and grant leadership, Kyle is a dedicated educator, musician, and speaker. He regularly teaches and presents on AI, cybersecurity, and workforce transformation, emphasizing hands-on innovation, business impact, and preparing students for in-demand careers.

Course Vibing: Vibe Coding with Canvas API and LLMs (INTRO)

July 20-24 from 9:30am-5:30pm CT; 10:30am-6:30pm ET  

 

Description 

This workshop introduces faculty to a new way of building and maintaining Canvas courses using AI and lightweight coding. Instead of manually clicking through Canvas, participants will learn how to “vibe code” using GitHub Codespaces, the Canvas API, and large language models (LLMs). 

Participants will follow a practical workflow of generating, previewing, publishing, and verifying course content in real time. 

The goal is to help faculty save time, reduce repetitive work, and create more consistent and scalable course experiences. No programming experience is required. 
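As a taste of the publish step, here is a minimal sketch of how a script can create a Canvas page through the Canvas REST API. The base URL, course ID, and token below are placeholders you would replace with your own; the actual network call is left commented out:

```python
# Sketch of the "publish" step in the vibe-coding workflow: creating a
# Canvas page via the Canvas REST API. All values below are placeholders.
BASE_URL = "https://canvas.example.edu"   # your institution's Canvas URL
COURSE_ID = 12345                         # a course you are allowed to edit
TOKEN = "YOUR_API_TOKEN"                  # Account -> Settings -> New Access Token

def build_page_request(title, body_html):
    """Assemble the URL, headers, and form data for a page-creation call."""
    url = f"{BASE_URL}/api/v1/courses/{COURSE_ID}/pages"
    headers = {"Authorization": f"Bearer {TOKEN}"}
    data = {"wiki_page[title]": title, "wiki_page[body]": body_html}
    return url, headers, data

url, headers, data = build_page_request("Week 1 Overview", "<p>Welcome!</p>")
# To publish for real (e.g., with the requests library):
#   requests.post(url, headers=headers, data=data)
```

In the workshop workflow, an LLM generates the HTML body, a script like this pushes it to Canvas, and you then verify the result in the Canvas UI.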

Objectives

By the end of the session, participants will be able to: 

  • Explain the role of GitHub, GitHub Codespaces, and the Canvas API in an AI-supported course design workflow 
  • Use AI to generate and refine Canvas-ready course content within GitHub Codespaces 
  • Preview, publish, and verify course changes in Canvas using an iterative workflow 
  • Apply basic GitHub workflows such as opening a repository, editing files, and rerunning scripts 
  • Complete a capstone project that creates or improves a real Canvas course component 

Pre-requisites  

Participants should have basic familiarity with Canvas, including the ability to navigate a course and edit simple content (pages, assignments, or discussions). They must also have access to an active Canvas course where they can make edits and generate an API token. 

No programming experience is required. Participants only need basic computer skills (copy/paste, opening links, following step-by-step instructions) and a willingness to try a new workflow using AI and simple scripts. 

A GitHub Codespaces account is helpful but not required, as setup will be guided during the session. 

Required Textbook

None. 

Suggested/optional Textbook

None. 

At-Home Computer Requirements

No additional setup is required beyond the basics: a stable internet connection, a working webcam, and Canvas access. Webcam and dual monitors are highly recommended; tracks often require reading instructions on one screen while working on the project on another. 

Please note that content is subject to change or modification based on the unique needs of the track participants in attendance.   

Agenda 

July 20: Foundations, Demo, and First Connection 

  • Welcome and workshop overview  
  • Live demo of the full workflow   
  • Intro to APIs and Canvas API basics  
  • Generate a Canvas API token  
  • Intro to GitHub (what it is and why it’s used)  
  • Set up GitHub Codespaces  
  • First connection to Canvas (test + verify) 

July 21: Reading, Previewing, and Understanding Content 

  • Review and warm-up  
  • Explore course data from Canvas  
  • Understand structure (modules, pages, assignments, discussions)  
  • Use AI in Codespaces to interpret and organize content  
  • GitHub basics (light): working in a repo, saving changes  
  • Preview content before making changes  
  • Hands-on: read, inspect, and verify course content  

July 22: Creating and Updating Content (with Verification) 

  • Review and warm-up  
  • Generate content using AI (pages, assignments, discussions)  
  • Preview and refine before publishing  
  • Push updates to Canvas using scripts  
  • Verify changes in Canvas (did it actually work?)  
  • GitHub basics (next step): updating files and rerunning scripts  
  • Hands-on: create, update, and confirm results  

July 23: AI Workflow, Iteration, and Reuse 

  • Review and warm-up  
  • Prompting AI for structured, reusable content  
  • Rapid iteration: ask → preview → adjust → publish  
  • Improve consistency across course materials  
  • GitHub basics (optional): simple versioning / keeping track of changes  
  • Hands-on: generate, test, refine, and verify  

July 24: Capstone Build, Automation, and Next Steps 

  • Review and quick planning session  
  • Capstone Project (the primary deliverable of the workshop)  
  • Choose a real course task (build or improve a module, unit, or assignments)  
  • Use AI + scripts to generate and update content  
  • Continuously preview, publish, and verify  
  • Intro to simple automation ideas (bulk updates, reuse across courses)  
  • Share projects and reflect  
  • Wrap-up and next steps 

Instructor

Jonnathan Resendiz is an Assistant Professor in the Computer Information Systems Department and the Faculty Director of the AI Incubator at GRCC. He has a Master’s and Bachelor’s Degree in Computer Science from the University of Texas at Dallas with an emphasis on Artificial Intelligence. As the Faculty Director of the AI Incubator, Jonnathan actively participates in the development and implementation of artificial intelligence initiatives and projects at GRCC. 

Fundamentals of Quantum Programming (INTERMEDIATE)

July 20-24 from 9:30am-5:30pm CT; 10:30am-6:30pm ET  

 

Description

This five-day workshop introduces core quantum programming concepts, including the use of qubits, superposition, entanglement, and measurement, as applied in circuit design. Participants will complete hands-on activities to design and simulate circuits using Python-based tools. An overview of cloud-based quantum platforms is also presented. 

NOTE: This track will be a repeat of content provided in “Introduction to Quantum Computing” (Spring Working Connections online, March 2026). Participants who attended the Spring 2026 track may register only if space is available at the end of the registration period.

Objectives

  • Explain core quantum computing concepts, including qubits, superposition, entanglement, and measurement, and contrast them with classical computation  
  • Design and simulate quantum circuits using Python-based SDKs, and interpret resulting outputs  
  • Analyze the effects of noise and apply introductory error mitigation strategies in circuit simulations  
  • Summarize quantum resources available on cloud-based platforms 
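To give a flavor of what "design and simulate" means here, the following is a toy two-qubit statevector calculation in plain Python. It is an illustration only, not the SDK used in the track:

```python
import math

# Toy statevector demo of superposition and entanglement (a Bell state).
# Amplitudes are ordered |00>, |01>, |10>, |11>.
s = [1.0, 0.0, 0.0, 0.0]                 # start in |00>

# Hadamard on the left qubit: |00> -> (|00> + |10>) / sqrt(2)
h = 1 / math.sqrt(2)
s = [h * (s[0] + s[2]), h * (s[1] + s[3]),
     h * (s[0] - s[2]), h * (s[1] - s[3])]

# CNOT with the left qubit as control: swap the |10> and |11> amplitudes
s[2], s[3] = s[3], s[2]

probs = [a * a for a in s]               # measurement probabilities
# Only |00> and |11> remain, each with probability 0.5: measuring one
# qubit instantly determines the other, which is entanglement.
```

Python-based quantum SDKs wrap this same linear algebra in circuit-level abstractions (gates, registers, simulators), which is what the hands-on sessions use.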

Pre-requisites

This workshop is designed for participants with some prior programming experience and an introductory knowledge of Python. Motivated attendees at varying skill levels can follow along, but familiarity with these basics will help ensure a smooth experience during the hands-on sessions. 

Required Textbook 

Constantin Gonciulea and Charlee Stefanski, Building Quantum Software in Python. Manning, May 2025. ISBN 978-1633437630 (digital version available from Manning). 

Suggested/optional Textbook 

None.

At-Home Computer Requirements

A web browser is required, along with GitHub and Google accounts. Webcam and dual monitors are highly recommended; tracks often require reading instructions on one screen while working on the project on another. 

Please note that content is subject to change or modification based on the unique needs of the track participants in attendance.  

Agenda

July 20:  

  • Welcome and Introductions 
  • Classical vs. Quantum Computation 
  • Qubits, Superposition, and Measurement 
  • Hands-On Activities 

July 21: 

  • Measuring Qubits and Interpreting Results 
  • Common Single-Qubit Gates 
  • Hands-On Activities 

July 22: 

  • Multiple Qubits and Circuits 
  • Entanglement 
  • Multi-Qubit States 
  • Changing Amplitudes with Quantum Transformations 
  • Hands-On Activities 

July 23: 

  • Quantum Oracles 
  • Interference 
  • Errors in Quantum Systems 
  • Hands-On Activities 

July 24: 

  • Amplitude Amplification 
  • Hands-On Activities 
  • Overview of Major Cloud-Based Quantum Platforms 
  • Assessment 

Instructors

David Singletary is a faculty member in the School of Technology at Florida State College at Jacksonville. He teaches courses in software development, data science, and AI. In a previous life David was employed as a software engineer at Cisco and various startup companies in Silicon Valley. David graduated from the University of Central Florida with a B.S. and from the University of Colorado with an M.S. in Computer Science.  

 

 

Pamela Brauda is a faculty member in the School of Technology at Florida State College at Jacksonville, where she teaches courses in programming, networking, and data science. Before teaching at FSCJ, Pamela worked as a Metadata Analyst with the Florida Department of Law Enforcement, taught programming and software development at the University of North Florida, created and operated several small businesses, and taught high school mathematics. She graduated from the University of Georgia with a B.S. and from the University of North Florida with an M.S. in Computer Science. 

Hands-On SecAI+: Securing AI Systems and Building AI-Powered Cybersecurity Tools for the Classroom (INTERMEDIATE)

July 20-24 from 9:30am-5:30pm CT; 10:30am-6:30pm ET  

 

Description

AI is reshaping the cybersecurity landscape on both sides of the fence. Defenders are using LLMs to triage alerts and write detections faster than ever, while attackers are using the same tools to generate polymorphic malware, deepfake voices, and prompt-injection payloads at scale. CompTIA’s new SecAI+ certification (CY0-001 V1, launched February 2026) is the first vendor-neutral credential aimed squarely at this shift, and instructors who teach Security+, CySA+, or networking courses are increasingly being asked to fold AI security topics into their curriculum.

This five-day, hands-on workshop walks IT and cybersecurity instructors through the full SecAI+ exam objectives — Basic AI Concepts, Securing AI Systems, AI-Assisted Security, and AI Governance/Risk/Compliance — with a working lab environment so participants experience prompt injection, model guardrails, RAG security, MITRE ATLAS mapping, and AI-driven security automation firsthand instead of just reading about them. Day 5 opens with a workshop exam covering all four domains and closes with a live showcase of working AI-powered classroom tools — lab generators, AI tutoring patterns, MCP-backed exercise environments, and AI coding-agent workflows — that participants can adopt or adapt for their own courses in the upcoming semester.

Objectives

  • Explain core AI concepts relevant to cybersecurity, including the differences between LLMs and SLMs, prompt-engineering strategies, retrieval-augmented generation (RAG), and security considerations across the AI lifecycle.
  • Apply AI threat-modeling resources (OWASP LLM Top 10, OWASP ML Security Top 10, MITRE ATLAS, NIST AI RMF) to identify vulnerabilities in deployed AI systems and implement appropriate technical and procedural controls.
  • Demonstrate AI-assisted security operations, including the use of AI tools and Model Context Protocol (MCP) servers to automate detection, triage, vulnerability analysis, and incident response.
  • Evaluate AI-powered classroom tool patterns — including lab generators, AI tutoring systems, MCP-backed exercise environments, and AI coding-agent workflows — and develop an adoption plan to integrate one or more into their own course context.

Certification Prep

CompTIA SecAI+ (exam CY0-001 V1)

Pre-requisites

  • Working knowledge of cybersecurity fundamentals (Security+ level recommended; SecAI+ is positioned as an expansion certification on top of core cyber knowledge).
  • Comfort working at the Linux and/or Windows command line.
  • Basic familiarity with Python, or willingness to read and run provided scripts.
  • Experience teaching or supporting IT, networking, or cybersecurity courses.
  • A laptop with a modern HTML5 browser and an SSH client; Docker Desktop (or equivalent Docker runtime on Linux) is recommended for participants who want to run the labs locally. All AI lab infrastructure is also provided via cloud-accessible VMs for guaranteed parity, and the same labs are distributed as Docker Compose bundles. CPU-only laptops are sufficient for the full curriculum.

Required Textbook

None required. The official CompTIA SecAI+ CY0-001 V1 Exam Objectives document (free PDF from CompTIA) will serve as the primary reference.

Suggested/optional Textbook

Supplementary readings will be provided from OWASP (LLM Top 10, ML Security Top 10), MITRE ATLAS, and the NIST AI Risk Management Framework (AI RMF 1.0).

At-Home Computer Requirements

None required. Cloud-accessible VMs are provided for guaranteed parity across participants, and all labs are also published as Docker Compose bundles for those who prefer to run them locally on their own machine — useful for adapting the labs into your own classroom afterward. CPU-only laptops are sufficient for the full curriculum; GPU-accelerated work (local-model deployment scenarios) is demonstrated via the instructor’s lab cluster over screenshare, with optional opt-in cloud GPU access for participants who want hands-on time. Webcam and dual monitors are highly recommended; tracks often require reading instructions on one screen while working on the project on another.

Please note that content is subject to change or modification based on the unique needs of the track participants in attendance.  

Agenda

July 20: Day 1 — AI Foundations for Cybersecurity Instructors (SecAI+ Domain 1)

Theme: Building shared vocabulary and a working AI lab environment.

  • Types of AI: generative AI, machine learning, statistical learning, deep learning, transformers, NLP, LLMs vs. SLMs, GANs
  • Model training techniques: supervised, unsupervised, and reinforcement learning; fine-tuning; epochs; pruning; quantization; model validation
  • Prompt engineering fundamentals: system vs. user prompts, system roles, templates, zero-shot / one-shot / multi-shot patterns
  • Data security in AI: cleansing, verification, lineage, integrity, provenance, augmentation, balancing; structured vs. unstructured data; watermarking
  • Retrieval-augmented generation (RAG), embeddings, and vector storage
  • Security across the AI lifecycle and human-centric design (human-in-the-loop, oversight, validation)
  • Hands-on lab: stand up a local Ollama instance, run prompt-engineering experiments comparing strategies, and build a simple RAG pipeline against a vector database
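The retrieval half of the Day 1 RAG lab can be previewed with a deliberately crude stand-in. Real pipelines use a learned embedding model and a vector database, but the shape of the computation is the same; the letter-count "embedding" below is a made-up toy, not a real technique:

```python
import math
from collections import Counter

def embed(text):
    """Toy stand-in for a learned embedding: a letter-frequency vector."""
    return Counter(c for c in text.lower() if c.isalpha())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# Document chunks a RAG pipeline might index (illustrative content)
docs = ["prompt injection bypasses guardrails",
        "encryption protects data at rest",
        "rate limits throttle api abuse"]

query = "how does prompt injection work"
scores = [cosine(embed(query), embed(d)) for d in docs]
best = docs[scores.index(max(scores))]
# A RAG pipeline prepends the best-matching chunk to the LLM prompt
# so the model can ground its answer in retrieved context.
```

The lab replaces both toy pieces with real components: an embedding model served locally (e.g., through Ollama) and a vector database for the stored chunks.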

July 21: Day 2 — Threat-Modeling and Securing AI Systems (SecAI+ Domain 2.1–2.3)

Theme: Mapping the AI attack surface and locking the front door.

  • OWASP LLM Top 10 and OWASP ML Security Top 10
  • MIT AI Risk Repository, MITRE ATLAS, and the CVE AI Working Group
  • Threat-modeling frameworks for AI systems
  • Model controls: model evaluation, model guardrails, prompt templates
  • Gateway controls: prompt firewalls, rate limits, token limits, input/modality limits, endpoint access controls
  • Access controls for model, data, agent, and API layers
  • Hands-on lab: walk OWASP LLM Top 10 attacks against the lab model, deploy a prompt firewall (LLM Guard / NeMo Guardrails), implement rate limiting and tiered access controls
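One of the gateway controls above, rate limiting, is simple enough to sketch in a few lines. Here is a minimal token-bucket limiter of the kind that might sit in front of an LLM endpoint; this is an illustration, not the lab's actual tooling:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter (illustrative gateway control)."""
    def __init__(self, rate, capacity):
        self.rate = rate                  # tokens refilled per second
        self.capacity = capacity          # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost=1):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

bucket = TokenBucket(rate=1, capacity=3)
results = [bucket.allow() for _ in range(5)]   # a rapid burst of 5 requests
# The first 3 requests spend the stored burst capacity; the remaining
# 2 are rejected until tokens refill over time.
```

Production gateways layer the same idea per user, per model, and per token count, which is why the lab pairs rate limits with tiered access controls.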

July 22: Day 3 — Data Protection, Monitoring, and AI Governance (SecAI+ Domain 2.4–2.6 + Domain 4)

Theme: Defense in depth and the rules of the road.

  • Data security controls: encryption in transit, at rest, and in use; anonymization, classification labels, redaction, masking, minimization
  • Monitoring and auditing: prompt monitoring, log monitoring/sanitization/protection, response confidence scoring, AI cost monitoring, hallucination/accuracy/bias auditing
  • Attacks and compensating controls: prompt injection, model and data poisoning, jailbreaking, model inversion, model theft, supply-chain and transfer-learning attacks, output integrity, membership inference, excessive agency
  • AI governance structures: AI Center of Excellence, policies and procedures, AI roles (data scientist, AI architect, MLOps engineer, AI security architect, AI governance engineer, etc.)
  • Responsible AI principles: fairness, reliability, transparency, privacy, explainability, inclusiveness, accountability
  • Compliance frameworks: EU AI Act, OECD standards, ISO AI standards, NIST AI RMF; sanctioned vs. unsanctioned use, Shadow AI, data sovereignty
  • Hands-on lab: execute prompt-injection and jailbreak attacks against the lab model, build a PII redaction pipeline, configure logging with sensitive-data sanitization, and complete a NIST AI RMF mapping exercise on a sample system

July 23: Day 4 — AI-Assisted Security Operations (SecAI+ Domain 3)

Theme: Using AI as a force-multiplier for the SOC and the classroom.

  • AI-enabled tools: IDE plug-ins, browser plug-ins, CLI plug-ins, chatbots, personal assistants, Model Context Protocol (MCP) servers
  • Defensive use cases: signature matching, code quality and linting, vulnerability analysis, automated penetration testing, anomaly detection, pattern recognition, incident management, threat modeling, fraud detection, summarization
  • AI-enabled attack vectors: deepfakes (impersonation, mis/disinformation), adversarial networks, automated reconnaissance, social engineering, obfuscation, automated payload and malware generation, AI-driven DDoS
  • Automating security tasks: low-code / no-code tooling, document synthesis, incident-response ticket management, AI-assisted change approvals and deployment/rollback
  • AI agents and CI/CD integration: code scanning, software composition analysis, unit testing, regression testing, model testing
  • Hands-on lab: build a working MCP server for a security task (e.g., log triage or CVE summarization), use an AI coding agent to generate Sigma or Snort rules from incident descriptions, run an AI-assisted vulnerability triage exercise on a sample CVE list

July 24: Day 5 — Workshop Exam and AI Classroom Tool Showcase

Theme: Demonstrate mastery of the week’s material, then see what AI-powered teaching looks like in practice.

  • Morning: workshop exam covering all four SecAI+ domains — multiple-choice and scenario-based questions weighted to match the official exam blueprint; 80% to pass with one allowed retake
  • Afternoon: live tour and demonstration of working AI-powered classroom tools — lab generators, AI tutoring patterns, MCP-backed exercise environments, automated assessment tools, and AI coding-agent workflows for student projects
  • Architecture trade-off discussion: hosted-API vs. local-LLM deployments, cost and privacy considerations for student data, sustainable maintenance and update patterns
  • Roundtable on adoption planning: which tools fit which courses, what’s realistic for the upcoming semester, and building a community of practice across participating institutions

Instructor

Jason Zeller is an assistant professor in the Informatics Department at Fort Hays State University. In industry, Mr. Zeller has worked for internet service providers and as a Senior Product Engineer for Network Development Group, where he was responsible for creating and writing curriculum and lab content for use in colleges worldwide. His instructional responsibilities include being the lead professor for the undergraduate and graduate Cybersecurity and Information Assurance Management courses. Mr. Zeller is the Director of Operations for the Cybersecurity Institute and Technology Incubator at FHSU and the Co-Director of the Information Enterprise Institute, which is FHSU’s Center of Academic Excellence in Cyber Defense. Outside the university, Mr. Zeller owns CypherAxe, a cybersecurity consulting firm, and is the founder of Post Rock Data Solutions, a software development startup currently focused on agricultural software. Through both ventures he hires students from high school and college programs to gain real-world industry experience. His current work focuses heavily on AI-assisted development infrastructure and the secure integration of AI agents into cybersecurity education and operations.
