
How I Built an LLM-Powered CAD System for Bicycle Highways

Large language models (LLMs) are powerful tools. For many people, describing what they want in words is more straightforward than navigating a complex GUI. I used the Building LLM-Powered Apps course as an opportunity to replace our traditional track-building system with a natural-language prompt.
Created on August 4|Last edited on September 4
Illustration of how an urb-x track could look in a dense urban environment like New York
NOTE: This is a community submission from Lucas based in part on what he learned from our on demand Building LLM-Powered Apps course. You can sign up for free here!

Introduction and Setup

I am employed with urb-x.ch, a company specializing in the construction of bicycle tracks. Our tracks are uniquely designed using modular building blocks that can be combined in various ways. When pieced together, the entire track network forms a tree structure of these building blocks.
This task can be seen, in some ways, as a video game sandbox. Think of it as building tracks in Rollercoaster Tycoon or Trackmania. However, while these games offer an intuitive design experience, they don't provide the level of precision necessary for civil engineering projects. On the other hand, conventional CAD programs do not allow modular systems like this on a large scale.
We've got a no/low-code system in place that lets you whip up tracks easily, built on the back of Rhino and Grasshopper. The catch? You've got to invest a bit of time getting familiar with the available components.
Interacting with the system can, like in most video games, be seen as a transaction against a database. To this end, I designed a simple JSON/Pydantic schema to store the configuration of a track; edits then become simple updates to that configuration. Let's refer to this configuration as the "TrackConfig."
{
  "version": "0.0.2",
  "library_dir": [
    "/Users/lucas/projects/gh/"
  ],
  "origin": {
    "position": [0.0, 0.0, 0.0],
    "rotation": [0.0, 0.0, 0.0]
  },
  "global_upgrades": {
    "PV": true,
    "HEATING": true,
    "FE3": {
      "LR": false
    }
  },
  "instance": {
    "id": "Oliver",
    "part_name": "XC50-4L",
    "children": {
      "-1": {
        "id": "Liam",
        "part_name": "XC50-4R",
        "children": {
          "-1": {
            "id": "Noah",
            "part_name": "XC50-4R",
            "children": {
              "-1": {
                "id": "Ethan",
                "part_name": "XC50-4R",
                "children": {}
              }
            }
          }
        }
      }
    }
  },
  "configs": {},
  "selected": "Oliver"
}
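To make the "transaction" idea concrete, here is a minimal sketch of what one update against this dict could look like. Note that `append_child` is a hypothetical helper for illustration, not part of the actual codebase, which goes through typed update functions instead:

```python
# Minimal sketch of a config "transaction": find a parent in the tree and
# attach a new main child under key "-1". Hypothetical helper, not real code.
def append_child(config: dict, parent_id: str, new_id: str, part_name: str) -> bool:
    def find(part):
        if part["id"] == parent_id:
            return part
        for child in part["children"].values():
            hit = find(child)
            if hit is not None:
                return hit
        return None

    parent = find(config["instance"])
    if parent is None:
        return False
    # Caveat: this would overwrite an existing "-1" child; the real update
    # functions take an index parameter to handle insertion properly.
    parent["children"]["-1"] = {"id": new_id, "part_name": part_name, "children": {}}
    config["selected"] = new_id
    return True
```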
For this project, I use the new OpenAI function-calling feature to parse the model output, or rather, to constrain the model output to match the intended format.
Initially, I aimed to generate entire new TrackConfigs directly. This worked quite well, but with one significant downside: latency. A full generation took around 15 seconds, which is still faster than using the GUI for most changes, but unacceptable for an iterative process, so I wanted to speed things up. I log everything with W&B Prompts, which shows latency and many other metrics, and works extremely well, especially for drilling down on model results!
To tackle the latency challenge, my strategy was to reduce the required output from the model. The outcome of this approach was the integration of simple update functions.
from typing import Any, Optional, Union

from pydantic import BaseModel, Field

from PythonModules.schema import TrackConfig, Part, PartNames


class UpdatePart(BaseModel):
    """Update or create a part with the given id and part_name."""

    reasoning: str = Field(..., description="Answer all reasoning questions")
    id: str = Field(..., description="The unique id of a part. This is a unique random English first name.")
    part_name: PartNames = Field(..., description="The name of the part, e.g. 'XS12'")
    parent: str = Field("", description="The id of the parent part, or '' if the part is the root part. Usually this is the currently selected part.")
    index: int = Field(-1, description="Where to insert the new part. -1 means append.")

    def __call__(self, track_config: Optional[TrackConfig] = None):
        return  # code omitted


class UpdateSelection(BaseModel):
    """Select the part with the given id."""

    reasoning: str = Field(..., description="Which parts' ids could be selected? Why does each part make sense or not?")
    id: str = Field(..., description="The unique id of a part. This is a unique random English first name.")

    def __call__(self, track_config: Optional[TrackConfig] = None):
        print(f"UpdateSelection(reasoning={self.reasoning}, id={self.id})")
        return  # code omitted


# Other possible updates
While LLMs are powerful, they have a consistent drawback: zero-shot predictions can be unreliable. Outputs that include reasoning are far more reliable, but unfortunately there is no LangChain chain that combines a reasoning step with OpenAI function calls.
My admittedly hacky solution (scratch that: engineering solution 😉) is to mandate a 'reasoning' field in the function calls, as illustrated in the pydantic schema above.
This tactic, while unconventional, has proven invaluable and could be a key takeaway for similar applications.
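Why does this work? Function-call arguments are generated token by token in the order the fields are declared, so declaring `reasoning` first forces the model to "think" before it commits to the answer fields. A minimal sketch of the pattern, where the schema and example values are illustrative rather than taken from the actual system:

```python
from pydantic import BaseModel, Field

class SelectPart(BaseModel):
    # 'reasoning' is declared before the answer field on purpose:
    # the model fills in the JSON in declaration order, left to right.
    reasoning: str = Field(..., description="Think step by step before answering")
    id: str = Field(..., description="The id of the part to select")

# The model's function-call arguments arrive as a JSON string
# and are validated against the schema:
raw_args = '{"reasoning": "Ethan is the last leaf, so select it.", "id": "Ethan"}'
call = SelectPart.parse_raw(raw_args)
```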
Now, on to latency. Without the reasoning component, latency hovers around 1.5s, which is impressively low. With the addition of reasoning, latency varies depending on the depth and amount of reasoning I incorporate. Essentially, the added runtime scales linearly with the number of reasoning tokens. Fairly straightforward there. Most often, I achieve a latency of less than 4 seconds, although optimizations are ongoing and I'm hoping to get that lower.
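As a back-of-the-envelope check of that linear scaling (the per-token time and token count below are assumed figures, not measurements):

```python
# Rough latency model: fixed overhead plus time per generated token.
base_latency_s = 1.5        # observed latency without the reasoning field
per_token_s = 0.02          # assumed per-output-token generation time
reasoning_tokens = 100      # assumed length of a typical reasoning answer

estimated_latency_s = base_latency_s + per_token_s * reasoning_tokens
```

With these assumed numbers the estimate lands at 3.5 s, consistent with the "less than 4 seconds" I usually see.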
Thanks to W&B Prompts, I can craft my prompts and observe how other users interact with my tools. This insight lets me analyze how effective both my prompts and theirs are, leading to refinements such as improved system prompts for better predictions.
Let's dive into the prompt business. With the rather large context size of GPT-3.5 Turbo and my rather small number of tokens to predict, I've got some room to play around with, so I can splurge on my prompt.
I've chucked in everything I thought was important, keeping it neat and tidy. Check out the setup I went with below:
system_template = """
You are a world class assistant, helping an engineer to build an amazing street network from predefined parts. The whole system is modular.
You are responsible for deciding which updates to apply to the underlying configuration. The engineer will then update the configuration based on your input.
You get access to a pointer named 'selected'. This id shows you relative to what you probably should append things.
The whole track configuration is stored in a json file. The instance key holds a tree of parts. The descendants of a part are in its "children" field. The main child has key "-1".
Storing the whole track as a tree should make it easier to describe where to append things.

The configuration is based on the following pydantic schema:

class PartNames(str, Enum):
    Straight30mElement = "XS12"
    CurveRight50mRadius10mLengthElement = "XC50-4L"
    CurveLeft50mRadius10mLengthElement = "XC50-4R"
    DeletedElement = "Deleted"

class Origin(BaseModel):
    position: List[float] = Field(..., description="The origin position coordinates (x,y,z)")
    rotation: List[float] = Field(..., description="The origin rotation angles in degrees (x,y,z)")

class Part(BaseModel):
    id: str = Field(..., description="The unique id of a part. This is a unique random english first name.")
    part_name: PartNames = Field(..., description="The name of the part")
    children: Optional[Dict[str, 'Part']] = Field({}, description="Dictionary of child parts. Key=='next' is the default next part")

Part.update_forward_refs()  # This updates Part to include the recursive reference to itself

class TrackConfig(BaseModel):
    version: str = Field(..., description="The version of the track configuration")
    library_dir: List[str] = Field(..., description="List of paths to the libraries")
    origin: Origin = Field(..., description="The origin position and rotation of the track")
    global_upgrades: Dict[str, Any] = Field(..., description="Global upgrades for every element")
    instance: Part = Field(..., description="Part instances of the track")
    configs: Dict[str, Dict[str, Any]] = Field(..., description="Configuration dictionary for the track configuration")
    selected: str = Field(..., description="The id of the selected Element")

For new elements, think of a new english first name as id! Think of a random name.
If not specified otherwise, append or add new parts to the selected part. Never add new parts to a parent as a child other than -1, unless specified otherwise!
If a parent has other children, it is usually assumed that the child with key -1 is the main child and that you should append to that one or one of its children.

Although also visible in the schema, the following part_name's are available:
Straight 30m Element: "XS12"
Curve Right!! 50m Radius 10m outer Length Element: "XC50-4L"
Curve Left!! 50m Radius 10m outer Length Element: "XC50-4R"

Reasoning for the update fields should answer the following questions! You have to answer all of them:
- Which id do you want to update, or is it a new id?
- What part_name do you want to use?
- Why this part_name?
- What parents could you use?
- If the parent already has children, why do you want to replace the main child?

"""

prompt_template = """
The current configuration is as follows:
--------------------
{track_config}
--------------------
Use the given format to pick update steps from the following input:
--------------------
{input}
--------------------
Tips: Answer all questions.
Tips: Make sure to answer in the correct format!!!
"""
Now let's look at the model invocations:
import json
from typing import Any, Union

from langchain.chains.openai_functions import create_openai_fn_chain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.schema import SystemMessage


class Track:
    def __init__(self, llm, system_template, prompt_template):
        self.path = "track.json"
        self.new_config = None
        prompt_msgs = [
            SystemMessage(content=system_template),
            HumanMessagePromptTemplate.from_template(prompt_template),
        ]
        prompt = ChatPromptTemplate(messages=prompt_msgs)

        self.chain = create_openai_fn_chain([UpdatePart, UpdateSelection], llm, prompt, verbose=False)

    @property
    def track_config(self):
        if not self.new_config:
            with open(self.path, "r") as f:
                data_dict = json.load(f)
            self.new_config = TrackConfig(**data_dict)
        return self.new_config

    @track_config.setter
    def track_config(self, track_config: TrackConfig):
        self.new_config = track_config

    def store(self):
        with open(self.path, "w") as f:
            json.dump(self.track_config.dict(), f, indent=2)

    def agent(self):
        # Interactively update the track_config:
        # asks for user input and calls the callback function.
        while prompt := input("How should the track be updated? To quit, press enter without input."):
            self(prompt)

    # Possibly wrap the call in retries if the format to predict is very difficult:
    # @retry(wait=wait_random_exponential(min=1, max=60), stop=stop_after_attempt(6))
    def __call__(self, input) -> Any:
        json_str = json.dumps(self.track_config.dict(), indent=2)
        function: Union[UpdatePart, UpdateSelection] = self.chain.run(input=input, track_config=json_str)
        # Call the predicted update function on the track_config ...
        res = function(self.track_config)
        # ... and store the new track_config.
        self.store()
        return res


model = "gpt-3.5-turbo"
llm = ChatOpenAI(model=model, temperature=0.9)
track = Track(llm, system_template, prompt_template)
Now we can either use the track.agent() method to update the config step by step, or issue single commands:
track("append a new right element to the track")
track("Add a straight part to the last added track")
...

How I Used W&B Prompts

Using W&B Prompts here is mainly for logging and seeing how other folks are using my tool. It's a trip seeing the different assumptions people have when they're building tracks, and also catching what wacky assumptions the models come up with. You can spot these model assumptions right in the "reasoning" field of the function call.
And man, I'm all for how hassle-free this was. Just set an environment variable, and bam, you're good to go.
# we need a single line of code to start tracing LangChain with W&B
os.environ["LANGCHAIN_WANDB_TRACING"] = "true"
Alright, real talk: right now, my chain's just a one-step dance. If I could split it up into stages, especially separating out the reasoning, that'd be slick. But hey, it's a work in progress. I'm keen to see how others roll with my tool when they're laying down tracks.

Run: ruby-fog-11
Results!

Here's how the track shapes up in CAD.
Note: We're just showcasing the track here. All the extras like houses, streets, and bridges are out of the picture for clarity.

Next Steps

  • Time to jazz things up! I'm aiming to add config updates for dynamic placement stuff - think lamps, pillars, and some flashy upgrades like roundabouts. Good news? The schema's ready for it.
  • I've got my sights set on that multi-stage chain.
  • Alright, here's the deal: the model kinda trips when it faces deeply nested structures. Might need to chop up the tree and go more linear between those tree nodes.
  • And, for the cherry on top, I wanna make the setup a no-brainer for those who aren't techies ;)
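On that deeply nested problem, one possible direction (an idea sketch, not something implemented in the actual system) is to flatten the recursive children tree into linear runs of part ids that only split at branch points, giving the model shorter, flatter context to reason over:

```python
# Sketch: flatten the "children" tree into linear runs of part ids,
# starting a new run at every branch point. Hypothetical helper,
# not code from the actual system.
def flatten_runs(part: dict, run=None, runs=None):
    if runs is None:
        runs = []
    if run is None:
        run = []
    run.append(part["id"])
    children = part.get("children", {})
    if len(children) == 1:
        # single child: the run continues
        (child,) = children.values()
        flatten_runs(child, run, runs)
    else:
        runs.append(run)  # run ends here: leaf or branch point
        for child in children.values():
            # each branch starts a new run anchored at the branching part
            flatten_runs(child, [part["id"]], runs)
    return runs
```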
Big thanks for sticking around and reading!
