In this article, we’re going to talk about a few ways in which I use AI in the terminal, starting with running models on my terminal.
Running Models on Your Terminal
For that, my main tool is Simon Willison’s LLM CLI. If you don’t know who Simon Willison is, he’s a prolific builder of incredibly useful tools, many of them for the terminal, and he was also a co-creator of Django. He’s one of the best people to follow right now if you want to keep up with AI tools and AI in general.
So his LLM CLI is pretty awesome. Essentially, it’s a CLI utility for interacting with large language models, and it’s extremely simple to use: you can install it with pip, with brew if you’re on a Mac, or even with uv, which is really cool.
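For reference, the three install options look like this (pick whichever fits your setup):

pip install llm
brew install llm
uv tool install llm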
Once it’s installed, you have to set up your OpenAI API key. You can run:
llm keys set openai
That will prompt you to paste in your OpenAI API key. And once you’ve done that, you can do things like:
llm 'Hi, tell me three jokes in three different languages: English, Portuguese, and Italian.'
And there we go, you can interact with large language models directly on your terminal, which is pretty incredible.
Besides that, you can also use different models; there’s a bunch of functionality behind this that I really like, and switching models is one of my favorite parts. You just pass -m followed by whatever model you want. There’s obviously a set of models that are supported, and you can check what’s available in the documentation.
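For example, a one-off prompt against a specific model (assuming you’ve configured the matching API key) looks like this:

llm -m gpt-4o 'Explain what a shell pipe is in one sentence.'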
To see which models are available for you, run:

llm models
You can see all the models available to the CLI. As you can see, I have a bunch of OpenAI models and Anthropic models like Claude. And you can see at the bottom that the default model is gpt-4-turbo, because it’s fast and good enough for most of the things you’d do in a terminal. I usually use either the default, gpt-4o, or claude-3-opus-20240229, at least with this LLM CLI. You can also set up local models by installing the plugin llm-gpt4all, which you can find in the documentation as well. So that’s the first tool I really like; check it out.
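Before moving on: installing that plugin is a one-liner too. Here’s a sketch; the model ID below is just an example from the plugin’s README, so run llm models after installing to see what you actually get:

llm install llm-gpt4all
llm -m orca-mini-3b-gguf2-q4_0 'Say hi in five words'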
Ollama
Moving on, the other tool that I really like is Ollama. Ollama is an amazing tool for running local models, and it works perfectly in the terminal. You go to ollama.com and download the software you need to run it on your machine. Once you’ve done that, you can go to the Ollama GitHub and run a command like this:
ollama run llama3
Or run whatever model you want, and that will run the model in a chat interface. So if I run ollama run llama3, I’m now interacting with the Llama 3 model. If you’ve never downloaded Llama 3 before, it gets downloaded for you automatically.
Now, the question you might ask is, “Lucas, how do I know if I can run a model on my machine? It might be too big.” What you do is go to ollama.com, go to Models, and open the page for the model you want. For example, I love using Gemma, the recently released open model from Google. You pick the number of parameters you want, and the page shows how much memory that model is going to occupy. As you can see here, this one takes around 5GB, and that’s fine for me: I have 128GB of RAM and a 40-core GPU in my Mac, so I can run it.
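From the command line, you can pull the specific size tag the model page lists and check what’s on disk:

ollama pull gemma:7b   # the roughly 5GB variant
ollama list            # shows downloaded models and their sizes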
Essentially, most of the time you’ll be able to run up to roughly the 27-to-32-billion-parameter models, which require around 15 to 20GB of RAM from your machine. Beyond that, you’ll have to look at hosted inference providers like Groq, or at serving frameworks like vLLM or SGLang. But within those limits, you can run whatever model you want, for example gemma, with a command like:
ollama run gemma 'Write a one-paragraph essay on why the terminal is the best tool for productivity.'
Or whatever you want, and that will run the prompt directly. And this is running locally on my machine, which is so awesome.
Next in line, folks: there are a bunch of other tools you can run in your terminal. There might even be better ones than the ones I’m covering here, but I really do like these. There’s also llama.cpp, and LM Studio has a CLI too. There are plenty of other options to take a look at.
Working with Databases
The second thing I want to talk about is working with databases, and I’m including it here because of the amazing Datasette, created by the one and only Mr. Simon Willison. I really like this tool, especially because I work so much with LLMs. If you go to the Datasette GitHub and follow the install instructions, you run:
brew install datasette
Once you’ve run this, you can run commands like this one. Let me just show you:
datasette `llm logs path`
What’s cool about this command is that it opens up the database of all the interactions you’ve had with the LLM CLI: llm logs path returns the path to the SQLite database where all your chats are stored, and Datasette opens it up in your browser on localhost. If I click on logs, I can run SQL queries and get access to every interaction I’ve had with the LLM CLI.
For example, if I go to responses, there you go: “10 ideas for whatever.” And we can filter by model. I can select gpt-4-turbo, apply, and now I see all my recent interactions with that model. You can do all sorts of cool stuff with this. I’m actually going to give a presentation soon on the different ways I use LLMs, so this record of all my previous interactions is going to be absolutely key. You can use Datasette for all sorts of things, but this is definitely one of my favorites.
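By the way, if you just want a quick look at recent interactions without the browser UI, the LLM CLI can print them directly:

llm logs -n 3   # the three most recent prompts and responses, as markdown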
Working with Embeddings
Now, the next example that I really like is working with embeddings. If you don’t know what an embedding is, it’s just a vector representation of text: a list of numbers that captures the semantics of the text. You can work with these in the terminal to quickly search through a bunch of documents and things like that. For that, we’re going to use Simon Willison’s LLM CLI again, because it supports embeddings and works super well.
What you do is pretty simple. You go to the “Embeddings with the CLI” section in the LLM CLI documentation, and you’ll see examples of how to create a simple embedding for a phrase, for multiple documents, and so on. If we just copy this command, we can go to the terminal and create an embedding for this phrase using the model ada-002.
llm embed -m ada-002 -c 'Hello world'
And what you get is just a long list of numbers representing the embedding of the phrase. On its own that’s not useful. But we can use this to embed a whole set of phrases, and then when we want to search through them, run a command that ranks them by how close they are to our query and returns the results.
For example, I can embed the phrase “My happy hound” and store it in a collection called phrases, under the ID hound:
llm embed phrases hound -c 'My happy hound'
And now I can run a command called llm similar, which comes with the LLM CLI, to look up phrases similar to “hound.” Obviously that’s going to return this one, because it’s the only item in the collection:
llm similar phrases -c 'hound'
And there you go. So super cool. And what’s even cooler is that we can do this with, for example, a whole folder of notes, using a command called embed-multi.
llm embed-multi my-notes -m jina-embeddings-v2-base-en --files sample-notes/ '*.md' --store
In this case, I’m creating an embedding collection of my notes, called my-notes, using the model jina-embeddings-v2-base-en, from all the markdown files in the sample-notes folder inside my current directory. The --store flag saves the text itself alongside the vectors in the LLM CLI’s embeddings SQLite database (you could point it at a custom file with -d notes.db). And what’s cool is that once we’ve built this database, we can query it with llm similar.
llm similar my-notes -c 'prompts about learning'
So in this case, I’m getting everything in the my-notes collection that’s similar to the query “prompts about learning.” If I run this, we get a ranking of all the embeddings similar to that query, along with a score: you see an 85% score here. I’m not 100% sure, but that score is probably just the cosine similarity between the query and the embedded notes. And we get the content of the notes that matched. You can see that the file ai-prompt.md had the content closest to “prompts about learning,” which makes perfect sense, right? So this is pretty cool. Definitely check out the LLM CLI with all its capabilities. It’s freaking amazing.
Working Beyond Text
Now, working beyond text, folks: I love working with screenshots. And again, the one and only Simon Willison has an amazing tool for that called shot-scraper, which lets you create a full screenshot of a web page with a single terminal command.
shot-scraper https://github.com/simonw/shot-scraper
So I can run shot-scraper, and in this case I’m taking a screenshot of the GitHub repo for shot-scraper itself. When I run this, it saves the image locally to a file (I had already run it once, so it saved a second image). And when I open it up, I get a full screenshot of that repo, which is pretty freaking awesome, because you can then use it for different things. Later in this article, we’ll see how to work with images using the LLM CLI in different interesting ways. So working with screenshots is awesome.
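One more note: by default shot-scraper derives the filename from the URL, but you can control the output file and viewport, for example:

shot-scraper https://github.com/simonw/shot-scraper -o repo.png --width 1280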
Piping
And finally, we can get to piping. Now, piping is an essential part of working with AI in the terminal because you’re going to be able to get input from one place and then take it to another place, and so on and so forth. For example, what I can do is I can say:
llm 'create a fake database of people with their names ages and interests as a single .txt file' > sample_db.txt
With this command, I’m saving the output of the LLM call to a file called sample_db.txt. Now I can run cat sample_db.txt, and you can see that we created this fake database. It’s not perfect; we got a bit of extra text that we’d usually trim. But now we have names, ages, interests, everything in a txt file.
So now, back in the terminal, I can cat sample_db.txt and pipe that to an LLM. For example:
cat sample_db.txt | llm 'retrieve the oldest person in this DB with all their info'
And there you go: “56, interests: making, writing, collecting.” Let’s check: 50, 49, 56. Yeah, that’s the oldest person in this DB, and it got the information right. The point is to show how you can cat a file and pipe its contents with the pipe operator into the LLM CLI to extract information. And you can do that in many different interesting ways; we’ll see a few of those in just a sec.
So it’s pretty cool. We can do different things, and often we want to clean up outputs. I have a bunch of aliases for that, like clean-markdown, clean-mermaid, clean-python, clean-html, clean-jinja, clean-bash, and clean-diff, because sometimes I’ll say, for example:
llm 'what is a command to list five random files in current directory and show their contents' | clean-bash
clean-bash is a little helper (either a bash one-liner or a Python script, I don’t remember now) that cleans up the model’s output, keeping only the command part. When I run this, folks, this is what we get. I was expecting just the command, but I guess the output didn’t come wrapped in a bash code fence, and it needs that fence for this to work perfectly. That’s all right. We can try something like:
llm 'generate a markdown file with a three paragraph essay on how the terminal is awesome' | clean-markdown
Hopefully the backticks and the markdown fence get cleaned out of the output. If we wait a sec, we should get... there we go. We get it perfectly: just the markdown in the output, which is pretty awesome. So piping is awesome. You can use it in many powerful ways.
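If you want to build similar clean-* helpers yourself, a minimal version can be as simple as a filter that keeps only what’s inside the code fences. My actual scripts differ, but this captures the idea:

clean_fences() {
  # print only the lines inside ``` fenced blocks, dropping the fences themselves
  awk '/^```/ {inside = !inside; next} inside'
}

# usage: llm 'give me a bash one-liner to count files here' | clean_fences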
Creating Aliases
Now, creating aliases. I mentioned the idea of creating aliases before, and it’s super fun when you integrate with LLMs. For example, I could say something like:
alias summarize_this_info="llm -m gpt-4o 'summarize this'"
So now that I have this command, summarize_this_info, what I can do is, for example, if I go to my sample_db that we just created, right, and I say:
cat sample_db.txt | summarize_this_info
Hopefully, what we get is a summary of this file with just a single command. There you go. This is awesome: “This list contains a bunch of data,” and so on. We got a summary of that information, even more than I wanted. You can do this with a bunch of things: plain summaries, summaries with bullet points, all sorts of fancy, interesting templates. Aliases are super powerful. And if you want to reuse them forever in your terminal, you go to your aliases file, usually called .aliases or .bash_aliases in your home folder, add the alias line there, and you’re good to go. Every time the terminal starts, it loads those aliases, and you can reuse the command forever, which is super cool, awesome, amazing. So aliases are awesome, folks.
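Concretely, persisting the alias just means appending a line like this to that file (adjust the filename to your shell setup):

# in ~/.bash_aliases (or ~/.zshrc)
alias summarize_this_info="llm -m gpt-4o 'summarize this'"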
Working with the Clipboard
Now, working with the clipboard. This is one of my favorite things, because pbcopy and pbpaste are real time-savers for me. We can combine the idea of piping with quick copy and paste to be really agile when working with LLMs. For example, I can go to the “Embeddings with the CLI” section in the documentation for Simon Willison’s tool, hit Ctrl+A and then Ctrl+C, and now I have it on my clipboard. Then I can run pbpaste and pipe that to llm.
pbpaste | llm -m gpt-4o 'extract from this documentation all the relevant commands to work with embeddings in the terminal properly indexed into a markdown style' | clean-markdown > embeddings-cli-main-commands.md
So what’s happening is: I paste from my clipboard, pipe that to a model with this prompt, clean the result up with clean-markdown, and save it to a file called embeddings-cli-main-commands.md. When I run this, hopefully everything works and we can see the results. One, two, three... and there you go.
So now if I open embeddings-cli-main-commands.md, there you go, folks. All the relevant commands were extracted from the documentation and saved to this file in the style we were going for, which is pretty awesome. You see embed-multi, embed, embedding phrases; everything got saved properly. So this is pretty cool.
The thing is, I don’t like typing all of that, so I wrote little aliases: pbpaste for me is just the letter p, and pbcopy, which copies to my clipboard, is the letter c. So I can, for example, paste the contents of this documentation page and say to the model:
p | llm 'summarize these docs into two sentences'
And when I run this, it takes the contents of my clipboard and summarizes them quickly. And you get the output. Beautiful: “Store embeddings,” etc. Everything’s correct. So this is just amazing. It’s fast, it works perfectly, and I just have to type the letter p to take whatever is on my clipboard and send it to an LLM. And I can do the reverse to get an answer or an explanation from an LLM and copy it to my clipboard.
llm -m gpt-4o 'write down 10 ways I can use the terminal to increase my productivity' | c
That’s the same as piping to pbcopy, which means it copies to my clipboard. So when I run this, nothing shows up in the terminal, but the answer was copied to my clipboard automatically. Now if I open an editor and paste, see, the model’s whole answer is there: “Automating repetitive tasks, efficient file management, command-line tools,” and so on. So that’s another way of working with AI in the terminal that I really like. Definitely check this out.
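For reference, those two aliases are just the following (Linux folks can substitute xclip or wl-clipboard equivalents):

alias p='pbpaste'   # print clipboard contents to stdout
alias c='pbcopy'    # copy stdin to the clipboard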
Chaining Prompts
One thing that you can do with piping that’s pretty cool is you can chain prompts together. That’s another interesting way of working with AI. So what you can do is you can do something like, for example:
llm 'write three jokes about AI and life' | llm 'give each of these jokes a title and output the title plus the joke as a bullet point'
What we’re doing: the first prompt writes three jokes about AI and life, and the second prompt gives each joke a title and outputs the title plus the joke as bullet points. When I run this, the output of the first prompt is sent to the second model, which processes it, and there you go: we get a title and a joke, a title and a joke, and so on. You can expand this to do whatever you want, with multiple prompts chained together. So it’s a pretty interesting way of working with AI.
All right, so chaining prompts is really fun, and it goes beyond chaining LLM calls. Folks, I have a paper here in my folder about using prompts to teach learners how to effectively use AI code generators. What we can do that’s really interesting with this idea of chaining is run a tool called pdftotext on that paper.pdf:
pdftotext paper.pdf paper.txt
This extracts the text from the PDF, so now I have the paper as a text file right here. Then I can cat paper.txt and pipe that to a model, in this case gpt-4o:
cat paper.txt | llm -m gpt-4o 'write a comprehensive list of all the main findings and insights from this paper' > paper-insights.txt
So you can work with documents in all kinds of interesting ways, and this is one of my favorites for getting quick summaries of papers. Now if I open paper-insights.txt, we have all the insights from the paper: “Rise of LLMs in computer education,” “Prompt engineering pedagogy,” etc. So it’s a pretty cool approach.
Fun & Interesting Repositories
Now, folks, some fun and interesting repositories that I think you should take a look at, which can complement the use of LLMs in the terminal. There are a bunch, but I noted a few that I really like. I’m going to start with Git-Ingest.
If you go to any GitHub repo, say the LLM CLI repo, and in the URL you replace the github.com part with gitingest.com, what you get is an amazing free tool that takes the entire repo and converts it into a single file that you can send to a large language model.
There are so many interesting ways to use this. The simplest is to copy the whole packed repo and pipe it from your clipboard, the way I showed you before, to get answers and outputs from an LLM. However, pay attention to the estimated token count, which this tool also provides, which is amazing. For example, 171,000 tokens is more than the context length of a model like gpt-4o; it is not, however, more than the context length of claude-3-opus-20240229. So I can hit “copy all,” paste it, and pipe it to claude-3-opus-20240229 with a prompt like “summarize this repo and give its core file structure”:
p | llm -m claude-3-opus-20240229 'summarize this repo and give its core file structure'
We run that and wait, because it’s a lot of tokens to process. And remember, we’re making API calls, so the cost associated with those tokens applies. And there we go, it’s summarizing the repo: it got all that information, created a beautiful summary, and pulled out the core structure and core info. This is pretty cool. Git-Ingest is extremely useful.
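If you’d rather skip the browser entirely, Git-Ingest also ships as a Python package with a CLI. If I remember right, something like this works (check its README for the exact flags and the default output file):

pip install gitingest
gitingest https://github.com/simonw/llm   # packs the repo into a local digest file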
Now, in the same style as Git-Ingest, folks, there’s also r.jina.ai, from a company called Jina, which is really cool. Take whatever web page you want, for example the LLM CLI documentation page we’ve been using, and prepend r.jina.ai/ to the URL. That transforms the page into a single markdown file you can send to a model, which is a much more appropriate format for LLMs. Then I can hit Ctrl+A, Ctrl+C and do the same thing I did before. You don’t even have to use it in the terminal; you can paste it into ChatGPT or whatever. But it’s another different, interesting way of working with LLMs.
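And because it’s just a URL prefix, it composes naturally with piping. Something like this should work:

curl -s 'https://r.jina.ai/https://llm.datasette.io/en/stable/' | llm 'summarize these docs in two sentences'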
Now, moving on, this is another one that I really like: RepoMix does the same thing as Git-Ingest, but it’s something you run locally. Essentially, RepoMix packs an entire repo into a single AI-friendly file. So if I clone a repo, I can run RepoMix on my local machine and get everything about that repo into a single file. RepoMix is another one that I really like, and I use it quite a bit.
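Usage is about as simple as it gets: from the root of a cloned repo, you run the command below (the default output filename may vary by version):

npx repomix   # packs the current repo into a single AI-friendly file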
In the same style, the amazing Simon Willison has a really cool tool called files-to-prompt, which does the same type of thing for any folder and any structure. You run pip install files-to-prompt, go to the terminal, and then, say I have a bunch of files here, I run files-to-prompt . and get a single output containing every single thing in the current folder and all the nested folders. I can copy that to my clipboard or save it to single-llm-ready-file.txt. One caveat: it errors on images, so anything that’s an image or a PDF gets skipped, but everything else gets saved to the .txt file. If I cat the file, we get all the content of everything in a single file. Obviously, you could write a bash one-liner to do this yourself, but it’s nice to have something that does it automatically. And again, it’s by the one and only, the great Simon Willison. Come on, how can you not love this guy?
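For reference, the whole flow is just:

pip install files-to-prompt
files-to-prompt . > single-llm-ready-file.txt   # every readable file in the tree, with its path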
Agentic Use Cases
Now, another thing I want to talk about is agentic use cases, which I really, really like. Agentic use cases in the terminal essentially mean being able to execute things, to perform actions. So far, we’ve been talking about creating files, editing files, and making API calls. But how about taking action in your terminal? There are a bunch of interesting ways you can do that.
Now, the LLM CLI doesn’t support search out of the box, but there are some options via plugins: there’s OpenRouter, there’s Gemini, which I think can do search, there are a few options there. And you can do some fancy things with agentic use cases in the terminal. For example, there are a bunch of simple terminal agents, like magentic-one-cli from AutoGen. magentic-one-cli is a very simple tool from the folks who created the new 0.4 version of AutoGen. You just run pip install magentic-one-cli, which I’m going to do right now. It works with Playwright, so it can do browser automation and things like that. Once that’s installed, I’m just going to copy this prompt:
magentic-one-cli 'find flights from Lisbon to Ireland and format the result in a table'
What’s really cool about this tool is that it works in the terminal but does things outside the terminal by using a browser. Hopefully this simple little demo works and we see it live in just a moment. There you go. As the agent works, we can see what it’s doing, what information it’s getting, and all of that good stuff. “To answer this request, we have assembled a team.” So it assembled a bunch of agents to do this for me: a computer terminal, plus agents to look up facts, verify them, and review progress. And when it goes to a page, we can actually see it; we can see the pages it’s accessing. This is a really cool feature of this use case. It’s really nice to watch it work and have all of that compacted into a single tool you can just run in the terminal. Pretty freaking awesome, right? So it’s going through pages, it’s browsing. It’s not a perfect use case; it might get stuck on some page, so be aware of that and just test it out. It’s not something I’d say you should use on a daily basis, but it’s interesting to experiment with. I’m going to stop the agent here, but you should definitely go experiment with your own specific use case.
Another one that I really like is llm-term, which is a simpler version of this, essentially for running simple local commands in your terminal. llm-term is a simple tool that lets you do things like:
llm-term 'list files in the current directory'
And it will generate that command and then execute it for you. It’s something you could implement in a simpler way yourself with a few bash tricks plus the LLM CLI, but I find it interesting because of the concept: it creates a command in your terminal, asks whether you want to run it (“yes, run that command” or “don’t run that command”), and then runs it. So it’s pretty cool.
Now, another example that I like is claude-code, which is in research preview right now. claude-code is a tool by Anthropic, the team that created Claude 3.7 Sonnet, one of the best models in the world for coding. It essentially lets you create an entire project with code, an entire app, everything from the terminal. It’s similar to another tool that I like called Aider, which actually came first; a lot of people are super hyped about claude-code, but Aider and claude-code do essentially the same thing: development in your terminal. For example, I’m going to create a folder called agentic-coding, go over there, and, if you’ve already installed claude-code with this command:
npm install -g @anthropic-ai/claude-code
This requires Node.js 18+ on your machine. Then you run the claude command, grant it permission to execute files, and you get a very nice UI to set up whatever project you want. For example, I could say:
Build me a simple UI in Python to download and visualize cool images from free APIs like NASA or Unsplash. And Claude is going to work autonomously within your machine and get stuff done for you. It’s pretty amazing. It works really well.
Working with Python Scripts
Now, I could do a full article just on this next topic, and I probably will at some point, because it’s fascinating. For now, I’m just going to show you a few summary examples.
Essentially, this lets you explore building simple Python scripts, using a tool called uv for package management, in such a way that the single file you generate is a tool in itself. There are different interesting ways to integrate uv into this, but I don’t want to get too complicated right now, so I’ll show you just a few examples.
All right, and as you can see here, Claude (from the demo we kicked off earlier) has been working away, and it seems like it worked. Let’s see: image_viewer. That’s pretty cool. And if I open it up, oh, look at that. That’s amazing. Fetching an image... Unsplash didn’t work, but the Dog API did. Let’s fetch an image. Ah, that’s so cool, folks. I just made this with Claude with one single prompt and my basic requirements, in my own virtual environment, and it already works. It’s just the coolest thing ever, right? So yeah, that’s the power of agentic use cases in the terminal. Check that out.
But now, back to Python scripts. The inspiration for this, obviously, was once again the one and only, the great Simon Willison, who wrote a really fascinating article called “Building Python tools with a one-shot prompt using uv run and Claude Projects.” You might imagine why I’m using Claude Projects, right? I don’t think I did as good a job as he did, obviously, but it’s pretty cool. The idea, from the uv documentation, is that if you write a Python file and add some inline metadata specifying the dependencies required to run that file, uv will parse that metadata, set everything up for you automatically, and run the file out of the box.
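To make that concrete, here’s a tiny self-contained example of the inline-metadata format (this example is mine, not from Simon’s article):

cat > hello-uv.py <<'EOF'
# /// script
# dependencies = ["requests"]
# ///
import requests

# uv reads the comment block above, installs requests into an ephemeral
# environment, and then runs this file
print(requests.get("https://api.github.com").status_code)
EOF

uv run hello-uv.py   # prints 200, with no manual pip install needed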
This is so cool because, for example, I can say:
p | llm -m claude-3-opus-20240229 'create a single Python file that can be executed out of the box with uv run to plot some interesting data from some free easy-to-access API and show that in a nice dashboard leveraging matplotlib' | clean-python > sample-plot.py
I’m pasting the uv documentation from my clipboard, sending it to Claude with the request for the script, hopefully cleaning up the Python fences with clean-python, and saving the result to a file called sample-plot.py. So let’s see if that works. OK, we have sample-plot.py, and if I just cat it, it looks like it worked pretty well; it generated the script.
After a little bit of debugging and fixing the script with the help of the LLM, I can run:
uv run sample-plot.py
And let’s see if it works. Packages already installed, everything works beautifully, so I didn’t have to worry about that. And does it actually generate the dashboard? Okay, there you go. As you can see, folks, how crazy is that? It plotted some temperature data, Bitcoin and crypto prices, and stock price information. Everything ran out of the box, just like that. So this is pretty awesome.
And what’s cool is that, for example, I can take one of my existing files, navigate_llm_output.py, a script for navigating the outputs of an LLM in the terminal, which doesn’t have the uv metadata. I can copy this file and go to my Claude Project, where I’ve added the instructions from the uv documentation on exactly how that inline metadata works. I paste the file in, and what I get back is, there you go, the same script with the correct uv inline metadata, which is what you’re seeing right here: pyperclip and so on, along with the script.
So let’s copy this and see if it works out of the box. I paste it into navigate_whatever.py and run uv run navigate_whatever.py. The only thing is that this script needs some input to run on, which I forgot about. So here’s what I’m going to do to fix that:
llm 'generate five suggestions for projects involving AI in the terminal as bullet points' | uv run navigate_llm_output.py
So hopefully this works. And there you go. This is really cool, folks, because this very simple UI you’re seeing is the Python script running on the output of the LLM call I just made in the terminal. It lets me select which parts of the LLM’s answer I want to copy to my clipboard and which I don’t. For example, I want to delete this bit, delete this and this; from the examples, I don’t want this one or this one, just these. When I’m done, I choose “copy and exit,” press c, confirm, and now only the information I wanted is on my clipboard. This is the power of integrating everything we’ve been talking about with Python scripts to make our workflows go completely crazy. It gets really cool as you go deeper and deeper into these ideas.
Jinja Templates
The final example I want to talk about is Jinja templates, which are a really, really nice way to do prompt templates that work directly in your terminal. So how does it work? First, I’ll show it in action with a template I have for creating Anki flashcards. Imagine I go to some paper on arXiv, say “Attention Is All You Need” (I always use this paper for examples), and copy all of its text with Ctrl+C. Then I go to my terminal, type p, then pmpt, which is an alias I have, and autocomplete to ankicards, a Jinja template I’ve written that creates Anki cards that can be automatically uploaded to Anki, the software. And then I pipe this to a model.
p | pmpt ankicards | llm -m gpt-4o
That’s all: I take the paper, pipe it through the ankicards prompt template, and send it to a model. And look what I get in return. When I run this, folks, the cards get created based on the paper, and if I wanted to, I could automatically upload them to Anki, but we’re not going to do that right now. As you can see, the cards are clearly related to the paper: “Transformer model,” “Attention Is All You Need,” and so on. I could get very specific with this.
But now let’s see how this works. If I run which pmpt, it points to a little bash function in my aliases file that calls a Python script, jinja_templates.py. The templates themselves live in an llm-templates folder next to that file. The script has one simple function to parse the input coming in from the terminal, and another to render the Jinja template: a basic three-line job where I set up the environment, load from the templates folder, get the template by name, and render it.
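My exact script is a bit longer, but a minimal sketch of the same mechanism, with function and variable names that are my own reconstruction, looks like this:

pmpt() {
  # render llm-templates/<name>.j2 with stdin bound to the content variable
  # (requires jinja2: pip install jinja2)
  python3 -c '
import sys
from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader("llm-templates"))
print(env.get_template(sys.argv[1] + ".j2").render(content=sys.stdin.read()))
' "$1"
}

# usage: p | pmpt ankicards | llm -m gpt-4o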
Now, what’s cool is that if I go to ankicards.j2, this is the template that just went on to create those cards based on the paper.
Create Anki flashcards for the content below. Try to capture all the main information.
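Only the first line of the template is shown above; based on the description that follows, the body plausibly continues something like this (the variable names are my reconstruction):

{{ content }}

Create {{ num_cards | default(10) }} cards, formatted as front/back pairs ready for Anki import.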
<h3 id="your-gateway-to-a-career-in-tech">Your Gateway to a Career in Tech</h3>
<p>Have you ever wondered if using GitHub can actually help you land a job? Let’s break it down and see how this platform can be a game-changer for your career, especially in technology fields like web development.</p>
<h3 id="understanding-the-basics-git-and-github">Understanding the Basics: Git and GitHub</h3>
<p>GitHub is a web-based service that hosts code repositories using Git, a powerful version control system. This means you can meticulously track changes in your code and collaborate with others efficiently. If you are learning web development fundamentals, getting comfortable with Git and GitHub is absolutely essential. It clearly demonstrates that you have practical skills in managing software projects.</p>
<h3 id="building-your-professional-portfolio">Building Your Professional Portfolio</h3>
<p>One of the most significant advantages of using GitHub is portfolio development. By uploading your projects, you create a public portfolio for potential employers to review. This portfolio can include various projects like websites, applications, or coding exercises that showcase your knowledge of numerous programming languages and tools.</p>
<p>Employers are often on the lookout for candidates who not only know programming languages such as:</p>
<ul>
<li>Hypertext Markup Language (HTML)</li>
<li>Cascading Style Sheets (CSS)</li>
<li>JavaScript</li>
</ul>
<p>…but also understand how to use GitHub for version control. When you present clean and well-documented code, it signals a high level of professionalism and technical competence.</p>
<p><strong>Note:</strong> Here is an example of a well-documented JavaScript function.</p>
<div class="language-javascript highlighter-rouge"><pre class="highlight"><code><span class="cm">/**
* Calculates the sum of two numbers.
* @param {number} a - The first number.
* @param {number} b - The second number.
* @returns {number} The sum of the two numbers.
*/</span>
<span class="kd">function</span> <span class="nx">calculateSum</span><span class="p">(</span><span class="nx">a</span><span class="p">,</span> <span class="nx">b</span><span class="p">)</span> <span class="p">{</span>
<span class="c1">// Returns the result of adding a and b</span>
<span class="k">return</span> <span class="nx">a</span> <span class="o">+</span> <span class="nx">b</span><span class="p">;</span>
<span class="p">}</span>
</code></pre>
</div>
<h3 id="showcasing-collaboration-skills">Showcasing Collaboration Skills</h3>
<p>Another important aspect is gaining collaboration experience. Many development jobs require teamwork, and GitHub’s features are designed to help you work with others on shared projects. You can manage issues, review code, and contribute to open-source software. This valuable experience can be highlighted in your job applications to demonstrate your ability to work effectively in a team environment.</p>
<h3 id="networking-and-community-engagement">Networking and Community Engagement</h3>
<p>Engaging with the broader GitHub community can also be highly beneficial. By contributing to various open-source projects or following repositories related to your interests, you can stay updated with the latest industry trends. This networking can connect you with numerous professionals who might provide job leads or valuable mentorship.</p>
<h3 id="validating-your-expertise">Validating Your Expertise</h3>
<p>Additionally, GitHub offers several certifications that validate your skills. These certifications can add significant credibility to your resume, making you stand out to potential employers.</p>
<h3 id="the-hiring-managers-perspective">The Hiring Manager’s Perspective</h3>
<p>When applying for jobs in web development or related fields, having an active GitHub profile with relevant projects can significantly improve your chances of getting hired. Hiring managers frequently review candidates’ GitHub accounts to assess coding style, problem-solving ability, and a commitment to continuous learning.</p>
<h3 id="more-than-just-code">More Than Just Code</h3>
<p>Ultimately, GitHub is more than just a code repository. It is a comprehensive resource for building a professional presence, demonstrating technical skills, and gaining experience that aligns with employer expectations in technology roles. For adults pursuing vocational training in web development fundamentals, mastering GitHub is a strategic step toward securing employment in the tech industry.</p>
The content variable is the output from the clipboard that gets piped into this template; that’s what gets substituted in. The other variable is the number of cards to create, which defaults to 10 unless I pass another number, and usually I just create 10, and that’s it. The core idea is that I have a bunch of Jinja2 files for different things, like bash-commands, bullet-instructions, ankify-full-conversation, ankify-python-script, and generate-google-search-queries. That last one is actually really cool.
Further Explorations
Finally, there’s another cool example: building command-line tools with Rust. You can take an approach similar to what I did with Python scripts, but create actual Rust apps that are much faster and more performant to work with in the terminal. I have a few examples; you can do this for rewriting tools in the terminal, for template variation, you can go wild with it. And there are a bunch of fun example use cases I’d advise you to take a look at, like creating bulk Anki cards, integrating tools like wget and curl, and combining commands like mmdc. This is a really fun one, folks; I’m going to show an example of a command I ran. Essentially, with a single command, you generate a full graph in Mermaid syntax, render it, and then you can show it off.
llm 'create a mermaid graph of a simple decision tree' | mmdc -o graph.png
With this one command, you generate an image directly: “start -> is it sunny? -> etc.” So that’s pretty cool. And there are so many things you can do to go completely wild with this.
The final example I’m going to show, folks, is that you can work directly with images: do OCR, scan information, or ask an LLM what an image is, and so on. For example, if I take a screenshot of something, say my browser page with my course, I can run a command like this:
llm 'extract the text from this image' -a screenshot.png
And there you go, the text gets extracted from that image directly. You can also ask “What is this?” and do all sorts of things with images. You can go wild, and the APIs are getting cheaper and cheaper, so you can do this for very little at API-call cost. And it goes on and on: you can work with PDFs using pdftk, you can convert almost anything to markdown using a tool called markitdown; there are so many things.
Layers of AI in the Terminal
And the last thing I’ll say, folks, is that there are layers for working with AI in the terminal, just like there are layers for working with AI in general.
- You start with basic file creation and editing: vim, nano, etc. You create files, edit files.
- Then you start piping things, like we were doing here: output from a file to an LLM, output from your clipboard, and so on.
- Then you start running commands with things like eval $(llm '...'), which is a way to run a bash command directly in the terminal. You’re moving toward more agency and more action in your terminal (see the sketch after this list).
- Then you move to agentic commands like llm-term, claude-code, Aider, magentic-one-cli, and so on, which go off autonomously in your terminal and start doing things for you, while you keep supervisory control over what’s going on.
- And then you get fully agentic terminals like Warp, which tries to be a terminal where you just ask for things and they happen. Pretty cool, pretty interesting idea.
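As a concrete sketch of that eval layer (be careful: you’re executing model output, so always read the command before running it):

# ask for a command, show it, then run it
cmd=$(llm 'Output ONLY a bash one-liner that counts the markdown files in the current directory. No fences, no commentary.')
echo "About to run: $cmd"
eval "$cmd"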
And I like this set of layers of control, because I actually prefer living in the first three layers rather than the last two. But I do sometimes move up a layer when I want a little faster iteration. And obviously, when I work in Cursor, it’s a merge of having an IDE where I can edit files directly with an agentic mode that can create files, edit them, and run terminal commands, everything in one; that’s why I use it. You can also use Windsurf for that, which is really powerful, or just basic VS Code with GitHub Copilot, which has similar capabilities. I really like where this evolution of AI is going, especially when working in the terminal, and that’s why I wrote this article.