
Become a sponsor to sealad886

@sealad886

Ireland

Hi there, thanks for stopping by!
I'm a current medical doctor who, in a former life, developed and implemented healthcare software. I'm now dipping back into the software world. My undergraduate thesis used ANNs to study gene models in humans, so imagine my surprise at what that concept has grown into with AI today!

My main projects right now are:

  • Chatparser - a tool to transform WhatsApp chat exports into usable data, the way you want to use it
    This application is currently command-line only. It has two functions now, with two more to go. It can transcribe all exported audio recordings into text, so you can have a nice text-only version of your chats. Most recently, it can also go the other direction: converting text back to speech, with the option to create a voice profile for each 'speaker' using voice cloning from WhisperSpeech.
    To do: handle photo and video data. The plan is to use LLMs to describe and summarize each data type (for text output), and perhaps to copy only the audio stream from videos (for voice output). TBD.
    NB: This actually is almost ready for a v0.1 release; I'm just trying to figure out licensing. Tips appreciated.
  • Med.tell.u (name tbd) - an application to create bespoke patient info leaflets
    A patient-focused tool, based on Mistral-7B, that generates bespoke patient info leaflets doctors can print on demand at the time of discharge. I have high hopes for including images in the final product, but that may be a stretch. Obviously aimed at improving my own and colleagues' practice.
    I can already see the slogan -- "Got questions? Let Med.tell.u"

Side-side-side projects:

  • Long-form text writing - trying to figure out how expanded context windows can enable long-form (and non-repetitive) writing on consumer-grade (i.e. my) hardware. Limited trials so far have not had a ton of success, but this also plays into Med.tell.u.
  • Ollamautil - a utility to manipulate the Ollama cache. Again, for consumer-grade folks like me: I don't have a ton of internal storage on my laptop (okay, so it turns out 2 TB isn't a lot for AI models plus 20+ years of my own stuff... who knew?). On a Mac only, it lets you keep the default Ollama installation settings (using HOME/.ollama as the cache location) while symlinking that path to 'internal' and 'external' cache locations; move and synchronize models seamlessly between them; and toggle which cache you're using. So you can store your models externally and load only the one(s) you need into local storage.
    Main benefits:
    speed saver -- loading/running an LLM from an external drive is time-consuming, even with NVMe drives;
    space saver -- external storage is cheap as dirt and doesn't need to be fast;
    long-term storage -- archive old models that you no longer actively use.
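    The symlink-toggle idea above can be sketched in a few lines of Python. This is a minimal illustration, not ollamautil's actual implementation: the function name and directory arguments are hypothetical, and it assumes the two cache directories already exist while HOME/.ollama is (or will become) a symlink pointing at one of them.

    ```python
    from pathlib import Path

    def toggle_cache(cache_link: Path, internal_dir: Path, external_dir: Path) -> Path:
        """Repoint cache_link (e.g. ~/.ollama) at whichever cache it is NOT using now.

        Returns the directory the link points at after the toggle.
        """
        # Where does the link currently point? (None if it isn't a symlink yet.)
        current = cache_link.resolve() if cache_link.is_symlink() else None
        # Flip to the other cache; default to the internal one on first use.
        target = external_dir if current == internal_dir.resolve() else internal_dir
        if cache_link.is_symlink():
            cache_link.unlink()  # removes only the link, never the cache itself
        cache_link.symlink_to(target, target_is_directory=True)
        return target
    ```

    Ollama itself never notices the switch, because it always opens the same HOME/.ollama path; only the symlink's destination changes.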

Featured work

  1. sealad886/ollamautil

    A CLI utility for working with and moving the Ollama cache

    Python
