5 learnings from the Cursor Team interview
The Cursor Team is filled with some seriously creative engineers
This past week I listened to a podcast interview with the founding members behind Cursor. For those unfamiliar, Cursor is a programming text editor with AI baked into its core. It’s truly an incredible product and something I use personally outside of work for writing code. I’ve used AI assistants such as GitHub Copilot for a while, but Cursor definitely feels like a step forward. I’ve already had some magical moments myself when using this editor; it’s really special.
This interview was fascinating, especially since I’m diving into the problem space of AI and specifically looking at how AI can help us code better. I’ve listened to the interview a few times simply because it was so interesting to hear how they were approaching certain problems. They are really at the cutting edge of what’s possible right now. It’s exciting times!
That said, as promised - here are 5 learnings from the Cursor team interview with Lex Fridman.
Priompt
Priompt (priority + prompt) is a JSX-based prompting library. It uses priorities to decide what to include in the context window.
Priompt is an attempt at a prompt design library, inspired by web design libraries like React. Read more about the motivation here.
This is a unique take on prompt engineering and how best to assemble a prompt for AI models to understand. At the end of the day, prompt engineering is extremely important for getting the desired outcome from a given model. For the most part it comes down to finding the best information to jam into a string of text and crossing your fingers that the LLM will get it. This take on creating prompts is the sort of thing I was glad to find out existed.
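Priompt itself is written in TypeScript/JSX, but the core idea of priority-driven context assembly can be sketched in a few lines. This is my own illustration of the concept, not Priompt’s actual API, and the word-based token counter is a deliberate simplification:

```python
# Toy illustration of priority-based prompt assembly: each piece of
# context carries a priority, and we drop the lowest-priority pieces
# until everything fits in the token budget.

def assemble_prompt(pieces, token_budget, count_tokens=lambda s: len(s.split())):
    """pieces: list of (priority, text); higher priority = more important."""
    kept = sorted(pieces, key=lambda p: p[0], reverse=True)
    while kept and sum(count_tokens(t) for _, t in kept) > token_budget:
        kept.pop()  # drop the lowest-priority piece
    # Restore the original ordering of whatever survived
    kept.sort(key=lambda p: pieces.index(p))
    return "\n".join(t for _, t in kept)

pieces = [
    (10, "System: you are a coding assistant."),
    (5, "Here is the whole file the user has open..."),
    (9, "User: fix the bug on line 42."),
]

# With a tight budget, the low-priority file dump is dropped first,
# while the system and user messages survive.
print(assemble_prompt(pieces, token_budget=15))
```

The appeal of the declarative approach is that you state what matters most once, and the library decides what to cut when the context window runs out.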
Homomorphic Encryption in LLMs
This type of encryption allows a user to send encrypted data to an LLM in the cloud (e.g., OpenAI), and the LLM can do the normal work that it does, all without seeing the data it’s acting on. The result then gets sent back to the user, where I believe it is decrypted for the user to view. From a user’s perspective, you get the benefit of using the best models out there powered by these big companies, with the reassurance that your data is safe thanks to the encryption.
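Fully homomorphic encryption for LLMs is still research-grade, but the basic homomorphic property is easy to demonstrate. Textbook RSA (no padding, tiny primes, completely insecure, purely for illustration) is multiplicatively homomorphic: multiplying two ciphertexts yields a ciphertext of the product of the plaintexts, so a server can compute on data it never sees:

```python
# Toy demonstration of a homomorphic property using textbook RSA with
# tiny primes. This is NOT secure -- it only illustrates how a server
# can operate on encrypted data without decrypting it.

p, q = 61, 53
n = p * q                 # modulus (3233)
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent
d = pow(e, -1, phi)       # private exponent (Python 3.8+ modular inverse)

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 6
ca, cb = encrypt(a), encrypt(b)

# The "server" multiplies ciphertexts without ever seeing a or b...
c_product = (ca * cb) % n

# ...and the user decrypts to recover a * b.
print(decrypt(c_product))  # 42
```

Real homomorphic schemes for neural networks support richer operations than multiplication, but the principle is the same: the computation happens on ciphertexts end to end.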
More Context More Confusion
One of the Cursor team’s findings was that the more context you give a model, the more confused it tends to get. In my own (very limited) experience, I’ve seen very similar results. While it’s great that models are expanding their context windows more and more, that doesn’t change the fact that, regardless of the window size, the models still don’t fully understand all of the context given. A model seems to forget things, and it’s hard to know where exactly it begins forgetting.
Model Routing
Throughout their conversation the question was posed: can you use models to determine which model would be best for answering a specific question? AKA model routing. This is an amazing question, and one they said doesn’t have a clear answer yet; it remains an open research topic. They did go on to say that they have run some rough experiments to try to accomplish this.
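To make the idea concrete, here is a deliberately naive sketch of a router: a cheap classifier (here, just keyword heuristics) decides which model should handle a query. The model names are hypothetical placeholders, and a real router would likely be a small trained model rather than hand-written rules:

```python
# Naive model-routing sketch: route each query to a (hypothetical)
# model based on keyword heuristics. A real system would replace
# these rules with a cheap learned classifier.

def route(query: str) -> str:
    q = query.lower()
    if any(k in q for k in ("prove", "derive", "step by step")):
        return "big-reasoning-model"   # hard problems: spend more compute
    if any(k in q for k in ("rename", "format", "typo")):
        return "small-fast-model"      # trivial edits: a cheap model is fine
    return "general-model"             # default fallback

print(route("Fix the typo in this comment"))  # small-fast-model
print(route("Prove this invariant holds"))    # big-reasoning-model
```

The hard research question is the one the team raised: the router itself has to judge difficulty without actually solving the problem, which is why there is no clear answer yet.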
Test Time Compute
This was brought up consistently throughout the interview. I had no idea what this was.
Test time compute refers to the amount of computational resources (e.g., time, memory, and processing power) required to make predictions or inferences using a trained machine learning model. It specifically pertains to the inference phase (after the model is trained) when the model processes new input data to produce predictions or outputs. — ChatGPT 4o
AKA: how fast, and how much work, the computer has to do to give us the answer it’s already been taught how to produce. Folks from Google think they can scale this time down much more, and there is a bunch of research being done here.
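One concrete way test-time compute gets spent is best-of-n sampling: generate several candidate answers and keep the highest-scoring one. The sketch below is my own toy illustration, where `generate` and `score` are stand-ins for a real model and a real verifier:

```python
import random

# Sketch of trading test-time compute for answer quality via
# best-of-n sampling. `generate` and `score` are stand-ins for a
# real model and a real verifier/reward model.

def generate(prompt: str, rng: random.Random) -> float:
    # Pretend each sample is an answer whose quality is noisy.
    return rng.random()

def score(answer: float) -> float:
    return answer  # stand-in verifier: higher is better

def best_of_n(prompt: str, n: int, seed: int = 0) -> float:
    rng = random.Random(seed)
    candidates = [generate(prompt, rng) for _ in range(n)]
    return max(candidates, key=score)

# More samples (more compute spent at inference time) can only help:
# the best of 16 candidates is at least as good as the first one alone.
print(best_of_n("question", n=1) <= best_of_n("question", n=16))  # True
```

This is exactly the inference-phase cost the definition above describes: every extra candidate costs more time and compute after training is done.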
Much more
To be honest, I learned so much listening to this podcast, and it was extra interesting given that I’m working on something in a similar space and running into many of the same problems. If AI interests you and you want to know where the edge of AI is at the moment, then I definitely recommend listening to the podcast. So many great topics discussed!