Inline AI code editing and generation in elixir cells #2222
Replies: 3 comments 1 reply
-
Hi @jonastemplestein! There is a more general discussion about this in #2073. In a nutshell, I think we will have to depend on some large LLM for some features, but I would love it if part of the problem (such as code suggestions) were handled by a model tailored to Elixir. :)
-
Ah, sorry, I'll continue the conversation over there. Serves me right for just searching for "AI" to find old threads. I'll probably still continue with my learning exercise of implementing Cmd+K on top of GPT-4, as I've already sort of got it working and I'm learning a lot from the experience. Would it be okay to open a PR for feedback at some point, even if it probably won't get merged?
-
@josevalim I've had a go at implementing this here: jonastemplestein#1. Any time you or the team can spend giving feedback would be much appreciated. :) I mostly have no idea what I'm doing, so the implementation isn't very good, but the GPT-4 version already works pretty well for a lot of things.
-
I've been using the Cursor editor for coding recently and have found the inline code editing via ChatGPT to be incredibly useful.
Here's an image to show how it would work:
[screenshot not included in this export]
I thought it might be a fun learning project for me, as I'm new to Elixir and Phoenix but have some time on my hands. With some pointers, I'm sure I could implement it.
Beyond this, I think it would also make sense to integrate other AI features, such as GitHub Copilot-style autocomplete and an AI chat in a sidebar (perhaps one that slides in on the right, like the output panel). And possibly even a full "coding loop" like Open Interpreter or ChatGPT's Advanced Data Analysis (formerly called Code Interpreter). I'd really like to use Livebook for any kind of data-wrangling task, but at the moment it's often faster for me to use those other tools and do the analysis in Python.
I guess the big strategic question is whether you want features that are so tightly coupled to a particular LLM API. Maybe you could even have a central LLM abstraction within Livebook, behind which you could also run e.g. a fine-tuned Code Llama model or similar to power these features 🤔
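To make the abstraction idea concrete, here is a minimal sketch of what such a provider-agnostic layer could look like in Elixir. All names here (`Livebook.LLM`, `complete/2`, the `Echo` backend) are hypothetical illustrations, not actual Livebook APIs:

```elixir
defmodule Livebook.LLM do
  @moduledoc """
  Hypothetical behaviour that any LLM backend (an OpenAI client,
  a locally hosted fine-tuned Code Llama, etc.) would implement,
  so features like inline editing stay decoupled from one API.
  """

  @callback complete(prompt :: String.t(), opts :: keyword()) ::
              {:ok, String.t()} | {:error, term()}
end

defmodule Livebook.LLM.Echo do
  @moduledoc "Trivial in-memory backend, useful for testing the plumbing."
  @behaviour Livebook.LLM

  @impl true
  def complete(prompt, _opts) do
    # A real backend would call out to a model here; this one just
    # echoes the prompt back as a fake "suggestion".
    {:ok, "# suggestion for: " <> prompt}
  end
end
```

Features such as Cmd+K editing could then take the backend module from configuration and call `backend.complete(prompt, opts)`, keeping the choice of model a deployment detail rather than a hard dependency.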