CodeWalker™ AI

CodeWalker™ AI combines CodeWalker™ with a large language model to perform some truly futuristic feats. It can show you exactly how an AI has modified your code, and it can apply your prompts recursively, indefinitely.

This is only phase 1 of our AI roadmap.

See CodeWalker™ AI in Action

CodeWalker™ AI Setup

CodeWalker™ AI Overview And Setup

CodeWalker™ AI currently uses a local large language model, hosted on your own network. This means your code is always On-Premises and secure. No third party has access to it, no AI models are being trained on it.

CodeWalker™ AI is very easy to set up with a local large language model hosted on your own network. Just follow the steps below.

This guide assumes you have already downloaded CodeWalker for whatever platform you plan on using it on. You can run an unlimited number of CodeWalker instances on your network, but AI speed will be limited by your LLM's hardware, so we recommend running the LLM on a separate device.

Ollama Settings

1. Install Ollama Or LM Studio

You will need an LLM server that adheres to the Ollama API. For obvious reasons, we recommend Ollama, though LM Studio also works well!

Install Ollama by clicking here

Or install LM Studio by clicking here


Install one of these, pick a model that can run on your system, and set it up to serve on your network (it's in Settings; see the screenshot next to this).

For example, if you're using Ollama, toggle "Expose Ollama to the network" to on, and note the local IP address of that system.

Note that LM Studio uses port 1234, while Ollama uses port 11434.

Why? In Leetspeak, ollama = (0)11434
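Once the server is exposed, you can sanity-check it from any machine on your network before involving CodeWalker™ at all. A minimal sketch, assuming 192.168.1.50 stands in for the LAN address you noted (the `/api/tags` and `/v1/models` endpoints are the standard model-listing routes for Ollama and LM Studio respectively):

```shell
# Hypothetical LAN address of the machine running your LLM; substitute your own.
LLM_HOST=192.168.1.50

# Ollama listens on port 11434, LM Studio on port 1234.
OLLAMA_URL="http://$LLM_HOST:11434"
LMSTUDIO_URL="http://$LLM_HOST:1234"
echo "$OLLAMA_URL"
echo "$LMSTUDIO_URL"

# With the server running and exposed, these should list your installed models
# (uncomment the one that matches your setup):
# curl -s "$OLLAMA_URL/api/tags"
# curl -s "$LMSTUDIO_URL/v1/models"
```

If the curl command hangs or is refused, double-check the "expose to network" setting and any firewall on the LLM machine.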



Also, we recommend setting "Context Length" to something larger than the default, so you can work with more source code at one time.
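In LM Studio, context length is a slider in the model settings. In Ollama, one way to raise it is a Modelfile; a sketch, where `num_ctx` is Ollama's context-length parameter and `llama3` / `16384` are example values you should swap for your own model and hardware budget:

```shell
# Build a larger-context variant of an Ollama model via a Modelfile.
# llama3 and 16384 are example values; pick what your hardware can handle.
cat > Modelfile <<'EOF'
FROM llama3
PARAMETER num_ctx 16384
EOF

# Then create the variant (requires Ollama to be installed):
# ollama create llama3-bigctx -f Modelfile
```

Larger contexts use more VRAM, so increase this gradually if your model starts swapping or slowing down.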
User Data Folder

2. Your Source Code Folder

Next we need a little sandboxed area to run in. Create a folder somewhere and put your source code in it. In our tests we use a folder called CWAIData on the desktop.

CodeWalker™ AI can overwrite and delete files in the folder you give it access to, so PLEASE don't use your existing source code repo.
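One safe pattern is to copy a snapshot of your project into the sandbox rather than pointing CodeWalker™ at the repo itself. A sketch, assuming the desktop location used in our tests and an example project path you would replace with your own:

```shell
# Create a disposable sandbox folder (CWAIData on the desktop, as in our tests).
mkdir -p "$HOME/Desktop/CWAIData"

# Copy your source tree into it, leaving the .git history behind so the
# original repo is never touched (uncomment and point at your real project):
# rsync -a --exclude .git ~/projects/mygame/ "$HOME/Desktop/CWAIData/"
```

Anything in this folder can be overwritten or deleted, so treat it as scratch space and keep your real repo elsewhere.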

Intelligence Settings

3. CodeWalker™ AI Settings

With all of that set up, navigate to the AI screen and enter your info.

This will need to include the local IP address and port of your locally hosted AI, as well as the path to your data folder.

When you're ready, click the connect button. If it's all set up properly, it will connect to the local LLM and this screen will change.

User Data Folder

Connected!

If it looks like this, you're connected! You should also see any existing source code in your folder, parsed and displayed. If you're starting with an empty folder, it will just show the folder.

Intelligence Settings

4. Enter Prompts

Now that you're connected, give it a prompt.

You are connected to the model you've chosen. Try something simple at first, to get the hang of it.

The word "Thinking..." will appear at the bottom of the screen while it's waiting for the LLM to finish.

In this first version, you just have to wait a bit. You are limited by the speed of your language model and the speed of your hardware.

Note that you can change your prompt at any time. If Recurse is on, you don't need to click the Send button; it will pick up the new prompt when it finishes the current one.


User Interface

While you're waiting, poke around! Right click to move around, alt+right click to rotate, and left click to select something.

If you select a folder or file, you can open it using the Open button. If you select a file, you can also open it in CodeWalker Microscope™.

Note that in this version, leaving this screen to go to Microscope stops the connection, and you'll need to start over. We'll change this soon!

Intelligence Settings

5. Think Recursively

Try to think of a prompt that is open ended enough that it can be run constantly. Think in terms of how human language would translate into results.

For example, asking to make something "better" is rather vague, while "more fun" would likely draw on the language model's concept of game loops and flow.

Meanwhile, "change the number of lives to 5" is very specific and will produce the same result each time.

Once you have something you want to try, click the recurse button, enter the prompt, and click the Send button.

Then just let it run.

User Interface

LLM Notes for the User

There are three tabs on the right. User displays any notes the LLM wants to tell the user.

Info is diagnostic and logging info about this session, like when it started, and how many tokens were used.

File should just contain the path to the currently selected file.

User Interface

6. In Progress

You can leave this running. If you click the folder icon on the ground and open it (the button in the lower-right panel), it will open the cache folder CodeWalker™ is using to store code while it runs.

If you walk up a few directory levels, you can view all of the steps CodeWalker is taking, including the exact prompt it's using in this session. A few more levels up, you'll find a Prompt folder.

That folder contains the master prompt that CodeWalker will include along with your other prompts. Feel free to change it and experiment with it to get different results.

User Interface

7. When you're done

Software is never done.


But if you must quit, just leave this screen to end the session. Or, if you want to wait for the next revision first, simply uncheck the Recurse button and wait for it to finish.

Understand Code Like Never Before

Start using CodeWalker™ today and transform how you review, understand, and manage code changes.

View Pricing Start Free Trial