BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Huntsville AI - ECPv6.8.3//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Huntsville AI
X-ORIGINAL-URL:https://www.hsv.ai
X-WR-CALDESC:Events for Huntsville AI
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Chicago
BEGIN:DAYLIGHT
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
TZNAME:CDT
DTSTART:20240310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
TZNAME:CST
DTSTART:20241103T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20240403T180000
DTEND;TZID=America/Chicago:20240403T190000
DTSTAMP:20260426T203921Z
CREATED:20240402T032141Z
LAST-MODIFIED:20240402T032141Z
UID:1622-1712167200-1712170800@www.hsv.ai
SUMMARY:Document Chunks with LLM Sherpa
DESCRIPTION:Continuing our discussion of Retrieval Augmented Generation (RAG)\, this week we will incorporate LLM Sherpa to provide chunks of text from PDF documents retrieved from the NASA archive. \nOur initial attempt used PyPDF2 to read text from the PDF documents. It was very slow and produced limited strings of text that did not match the paragraphs in the documents. We'll take a look back at what was available at the time\, then walk through the LLM Sherpa API and see what the project looks like with that piece incorporated. \nAs we get further into this project\, it has become apparent that the monolithic application needs to be split into components that can be hosted and updated separately. We will go through what has been done so far to containerize both the ChromaDB vector database and the LLM Sherpa chunking service.
URL:https://www.hsv.ai/event/document-chunks-with-llm-sherpa/
LOCATION:HudsonAlpha\, 601 Genome Way Northwest\, Huntsville\, AL\, 35806
ATTACH;FMTTYPE=image/png:https://www.hsv.ai/wp-content/uploads/2024/04/LLM-Sherpa.png
END:VEVENT
END:VCALENDAR