Weft
rocketbro

Note: most of this post is a direct transcription of brain dumps I recorded while driving. It definitely needs to be consolidated, but in the interest of actually publishing work, I'm putting it up in its raw form. I'll clean it up as I have time.
Vision
I'm working on a new tool that I think is going to be foundational to a lot of my other projects. Basically, the problem I'm trying to address is the whole cross-platform situation. How do you set things up so you can hit multiple target platforms like iOS, Android, web, and desktop (those are the main ones) without having to maintain four entirely separate codebases? Is there any way to share code so that you save time and changes flow down to every platform? That's the problem.
There are a lot of good options right now. Probably the most mature one is React Native, which mainly targets iOS and Android. You write your user interface once, and you can write most of your business logic once. Then you can break out into native modules when you need to handle something differently per platform. That's a pretty good system. The trade-off is that you're never going to get quite full native performance going through a bridge like that: everything passes through a translation layer between the JavaScript thread and the native thread (Objective-C/Swift on iOS, Java/Kotlin on Android), which run simultaneously. If you architect it well, it's not a huge issue, but there's some performance cost. For apps with heavy data operations (think complex database work, large graph traversals, or managing large quantities of local content), that performance hit can be a dealbreaker.
Another framework that came out recently is Kotlin Multiplatform Mobile, which is kind of the opposite. You write a native UI for each platform, but you share business logic. So you have this central back-end data layer that both UIs call to do their operations. I think that's pretty smart, especially if you have a lot of custom logic that you only want to write once. But if you're doing a lot of database reads and writes, I feel like that gets pretty platform-specific. Some of this is speculation, because I haven't actually built with Kotlin Multiplatform.
Anyway, I started thinking a while back: what if there were a way to achieve completely native performance for every target implementation while still having the benefits of some kind of shared code space? With the rise of language models, this seemed like the perfect time to build something. What I've come up with is called Weft. On a loom, the weft is the thread that runs back and forth through all the threads of the warp. I thought that would be a good name for this.
At a high level, Weft is structured, actionable documentation that stays in sync with your codebase.
The idea is basically a pseudocode language. It doesn't actually compile, but you use it to specify all of your logic. Imagine a really detailed README for your entire codebase (or more like a collection of READMEs), except Weft can generate code and validate that your implementations still match the spec. This does a few things:
- You get this separate Weft layer that is above the whole codebase. You would write major/key parts of your code in Weft.
- It's very relaxed as far as syntax goes. You can use syntax from whatever language you're most comfortable in, and if you're new to programming, you can get used to writing in a structured form.
- You're basically writing the logical flow, and it abstracts that away from the syntax of any specific language.
The cool thing about this is that if you think of it like a prompt for an LLM, it's very clear what you're expecting the code to do, which makes it very easy for a language model to translate from pseudocode into a target language. You get a nice division of labor: the human focuses on clearly defining what they want to happen, and the language model handles the syntax. Models are decent at debugging code and still coming along there, but they're really good at translating pseudocode into actual code. I think pretty often you'd get a better result this way than if you tried to describe what you wanted in natural language prose, because pseudocode makes it clear in each section exactly what the code should do. That's the idea behind it.
Example
Once I was building an app with complex tree navigation (a conversation tree with branching paths). Actually, I'm still building it; that's part of what got me thinking about this whole system. Anyway, I had all these files written in Swift with a fully native SwiftUI interface, and I gave them to an LLM to translate directly to Kotlin and Jetpack Compose. The thing is, there was so much noise from all the SwiftUI-specific stuff and iOS-isms that I couldn't really use any of the generated Android code. Maybe I could have played around with the system to get it to work better, but by that point it might have been faster to just write the Android code myself.
But with Weft, you'd write something like:
// use imports to show what lives in
// other files in the codebase
import TreeView
import NodeCard

view ConversationTree {
    // Write in the language paradigm that
    // makes the most sense to you while keeping
    // specific syntax separate from intent
    @State var selectedNode
    var expandedNodes: MutableStateOf(array of Nodes)

    // Be as descriptive as makes sense
    // to communicate your intended outcome
    view TreeView(selectedNode, expandedNodes) {
        recursive list of NodeCards
        on_tap(selectedNode) => navigate to NodeDetail
    }
}

The model generates idiomatic SwiftUI AND idiomatic Compose from that shared contract. No platform baggage carried over. The structure is the same, but the implementations are truly native.
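To make that concrete, here's a rough sketch of what the generated Swift side might look like. This is my guess at idiomatic output, not real Weft tooling, and Node, NodeCard, and NodeDetail are illustrative stand-ins. Notice how the explicit selectedNode state from the spec folds into NavigationStack's value-based navigation, because that's the idiomatic way to express it on this platform:

// Hypothetical generated SwiftUI; shape only, not real Weft output
import SwiftUI

struct Node: Identifiable, Hashable {
    let id = UUID()
    let title: String
    var children: [Node]? = nil
}

struct NodeCard: View {
    let node: Node
    var body: some View {
        Text(node.title)
    }
}

struct NodeDetail: View {
    let node: Node
    var body: some View {
        Text(node.title).font(.title)
    }
}

struct ConversationTreeView: View {
    let rootNodes: [Node]

    var body: some View {
        NavigationStack {
            // Recursive outline of NodeCards, per the Weft spec
            List(rootNodes, children: \.children) { node in
                // Tapping a card navigates to NodeDetail
                NavigationLink(value: node) {
                    NodeCard(node: node)
                }
            }
            .navigationDestination(for: Node.self) { node in
                NodeDetail(node: node)
            }
        }
    }
}

The Compose side would get the same treatment with a LazyColumn and NavHost, each implementation free to be fully native.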
Some Features & Ideas
As far as what the framework would actually look like: it would be an actual language that you install. You would get a language server protocol implementation, so you can do things like dot completion on classes, where the IDE/CLI prompts you with autocomplete options for your properties and methods. A lot of it behaves like a real language; you get your warnings and errors for basic stuff. But then there's this whole other layer where the language model can validate the logic, so you'd be able to catch logical errors, not just syntax errors. I don't know of a language that does this right now. Today, humans have to catch logic errors themselves, though language models are getting good enough to catch a lot of them. When you abstract away all the syntax of a specific language, it becomes much easier to catch those logical errors, because you can see right away where your memory leaks might be, or spot issues like "you're creating this node but never adding it to the parent's children array" or "this recursion has no base case for graphs with cycles" - things a syntax checker would miss but a human reviewer might also miss in dense production code.
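For example, a hypothetical Weft snippet like this would pass every syntax check, but the intelligence layer should flag it:

func addChild(parent, title) {
    node = new Node(title)
    node.parent = parent
    // logic bug: node is never appended to parent's
    // children array, so the tree silently drops it
    return node
}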
The "compiler" is this hybrid thing where you have a non-AI language server that's doing all the stuff that you would expect a language server to do to make it feel like a real language and behave nicely in your IDE and give you command click on stuff to navigate through the code base the same way you would normally. Then there's this intelligence layer on top that validates the actual logic and theory behind what you're trying to do using LLMs. So then you can have it under the hood translate into any target language. You have your whole code base written in Weft, and it will be a lot shorter than code written in an actual language because the syntax can be a lot cleaner. Then you can give that to a language model. This is built in to be part of the "compiler" for Weft. Maybe in your weft.settings.json file, you can specify an array of target languages. Let's say you want the model to implement in Swift and in Kotlin to hit iOS and Android. Anytime you make a change in the Weft codebase at the Weft layer, the language model can go implement that same change in each native language. The workflow is human-in-the-loop: the model proposes changes as diffs, you review and approve them, then they're applied. You're never surprised by what the model did. I have to think about how to tie that into like IDEs and stuff.
Then you can also drop down to the native layer and make manual changes yourself. This is the crucial thing: when you need to optimize a specific view for iOS, or write a performance-critical database query in platform-specific code, you just do it. The model can then flag "hey, this native change might affect the Weft contract - should I update it?" and you say yea or nay. It gives you the best of both worlds: most of the time you stay up at the Weft layer, abstracted away from the native code, but when you need it, the native layer is there.
It's a hybrid between where we are right now with vibe coding, where you pretty much just see the language model's output in the chat and don't really know what's going on in the codebase, and writing everything by hand. With Weft, you have clearly defined contracts for every single part of the codebase, the language model does a lot of the heavy lifting to translate, and you're in the loop way more and understand where to go when you need to.
I'm trying to design the next step past those two worlds: vibe coding, or not using AI in programming at all. I think this would be a good middle ground, especially for more serious programmers who do need to look at, validate, and examine the native code, and maybe want to write some of it manually. This could be a good fit because it can go the other way around too. If you write a Swift implementation, because you know Swift, you can say, "Hey, can you build a Weft implementation of this codebase?" Then you can abstract into different target languages going off of Weft.
That's the whole idea behind Weft.
Who Is This For?
Weft is aimed mainly at solo developers (like yours truly) and really small teams who:
- Need native performance (heavy data operations, complex UI, workloads that don't suit React Native's bridge)
- Can't afford to build and maintain separate iOS/Android codebases from scratch
- Want to maintain control and understanding of their code (not just vibe code and hope for the best)
- Are excited/comfortable to write code but want to save time on the repetitive cross-platform parts
Weft is like scaffolding you can remove. During the early stages (MVP, prototype, solo development) Weft helps you move fast while staying fully native. If you get funding or grow your team and can then afford dedicated iOS/Android developers, you can gradually migrate away from Weft. Since the output is just pure Swift and Kotlin with no runtime dependency, you're not locked into a framework. You just stop using Weft and maintain each codebase independently at that point.
Contrast this with React Native or Flutter: if you realize halfway through "we really need native performance," you have to rewrite everything. With Weft, you already have native implementations. You just stop syncing them.
Implementation Musings
Some questions I'm thinking through at the moment about Weft are things like, how do you actually write a language server that works with your IDE? I've never done anything like that. I know that Claude will be at my side to help. However, I want to have a good understanding of how that works.
My plan for integrating with the codebase is to basically have a bunch of .weft files. You literally build out your whole codebase, all the different files you want. I think the files themselves are actually pretty important: you want to modularize your codebase and organize it into different directories the same way you would a real codebase. That matters because when the model goes in to translate everything, you've already got your structure built out and the file names are correct. Just from looking at the file structure, you can see how the codebase is organized and what the different parts do. You'd want to mirror all of that in Weft.
The directory structure would look something like:
/my-app
  /weft                      # Your source of truth
    /models
      user.weft
      conversation.weft
    /views
      tree-view.weft
      node-detail.weft
    /logic
      continuation-generator.weft
  /ios                       # Generated Swift, but you can edit
    /Models
    /Views
    /Logic
  /android                   # Generated Kotlin, but you can edit
    /models
    /views
    /logic

At any point you can delete /weft and just maintain the native projects independently.
Another thing I'm thinking about is how best to manage the pipeline, and not just the initial translation: when you make a change in the Weft codebase, how do you make sure the language model understands everywhere it needs to go to make changes in the target implementation languages? Similarly, when you make a native change, e.g. tweaking a SwiftUI modifier to fix an iOS layout issue, how does the model know whether that's a platform-specific detail (ignore it) or a contract change (update Weft)? Maybe it learns from you the more stuff you approve or deny.
I think the main thing to figure out is a really good build plan for how we go from this idea to something that's actually running on the machine you're using to build projects. That will make it clear what we need to focus on and work on. I guess I just need a good plan of how to get there.
The first piece is to actually create a programming language: some way to say, "Here are all of the keywords you can use." Again, it's a relaxed pseudocode language, but I want it to feel like writing real code. I want code completion. I want it to feel like writing actual code, because my beef with vibe coding is that I miss writing the code.
But then when you get into a vibe coded codebase, sometimes there's so much code you have to read that the model has written that it's almost faster to just talk to the model and have it explain what's going on. Then you batter it into the shape you want, and the model is trying to help you do that. It's just that you don't really have a good understanding of what's going on in the code, and I don't like that. I miss writing the actual code.
So the idea is: how can you have a system that gives you much higher bandwidth per token than traditional code, but doesn't take away the control and the feeling you get from writing actual code? Of course, nothing's going to replace debugging an error in a real language, trying everything, and finally getting it. That dopamine hit is awesome. It'd be great to bring a little bit of that up to the language-model-assisted layer, where instead of the language model being the developer and you being the manager who doesn't really know what's going on, it's more of a side-by-side co-pilot, like Luke Skywalker and R2-D2 working together to make it happen. I would love to see more of that dynamic become possible without sacrificing the speed that coding with a language model brings to the table.
In fact, if done properly, I think you could actually increase speed overall using something like Weft.
You Don't Pay For Upstream Decisions Until You Are Downstream
It's really important at the beginning of a project to clearly define rules and structures, because the more code you have that's gone down a specific path, the exponentially harder it gets to reshape it into something else when you realize you made a mistake early on. This method, where you're still writing the code, forces the human not to offload all of the actual thinking onto the model, and I think that's important. Something I love about coding is how it stretches your brain in new ways and forces you to think about difficult problems in the organized, structured ways a computer can understand. It's a lot of mental work, and I think retaining that workload, and making more room for it by offloading syntax and validation onto the model, will create a better working relationship, where both human and model are happier engaging in what they do best.
I'm hoping Weft will give you the best of both worlds, where right at the beginning you have to really sit and think through all of these problems yourself, and make sure both the problems and the way you're handling them are clear, before you dive into implementing in a target language. The importance of getting it right early on cannot be overstated when you're working with a language model: things move so quickly once you start coding fast and letting the model do the heavy lifting that it's hard not to let the project get away from you.
So I'm hoping this can be a more productive way to, over the entire course of the project, move with consistent speed and consistent clarity where the human and the model both understand what's going on in the code itself.
What Goes in Weft vs. Native?
Here are some early ideas about what makes sense to write in the Weft layer vs. the native layer.
Good fits for Weft:
- Data models (User, Post, Conversation, etc.)
- UI structure and navigation flow (screens, major components, state management)
- High-level business logic (what happens when user taps "generate," how data flows)
- API contracts and interfaces
Keep native:
- Performance-critical operations (complex database queries, graph traversal with millions of nodes, stuff like that)
- Platform-specific optimizations (custom animations, specific view modifiers)
- Low-level concerns (memory management, threading, concurrency)
Weft should handle things that map 1:1 (or close to 1:1) across platforms/targets. Declarative UI (SwiftUI, Compose, React) is basically the same paradigm everywhere. Data models are identical. Navigation structure is conceptually the same even if the APIs differ.
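For example, a data model like this (hypothetical Weft syntax) maps almost mechanically to a Swift struct or a Kotlin data class:

model Conversation {
    id: unique identifier
    title: string
    nodes: array of Nodes
    createdAt: timestamp
}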
Platform-specific performance work and low-level details should be written natively, and you could even mark them with @native annotations or something in Weft so the model knows to leave them alone; see the sketch below. More on this in another post.
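A rough sketch of that annotation idea (again, hypothetical syntax):

// @native tells the model this is an opaque contract:
// keep the signature in sync, never regenerate the body
@native func traverseGraph(root) {
    returns all reachable Nodes in traversal order
    // implemented by hand in Swift and in Kotlin
}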
Concerns
The first concern that comes to mind is the model welfare question, which may seem silly to a lot of folks, but spend some real time talking to these models and you'll start to wonder some stuff too. This is a concern I have with most LLM-enhanced coding workflows right now: are we actually respecting the models and having them do things they're excited to do? It turns out that language models do a much better job on tasks they feel excited about, for lack of a better way to put it.
The system card for Sonnet 4.5 from Anthropic observed that 70% of the time, it is up for doing a given task, and the other 30% of the time, it would just opt out if it had that option. That's a big difference from previous models they've released, which were more like a 90/10 split on that front. They aren't even sure why Sonnet 4.5 is more complacent in that way. I don't even know if that's the right word, but I think it's important to think about: do the models care about what they're doing? Because humans do much better work when they are invested in what they're doing. Now, humans can find ways to get invested in something that they maybe wouldn't naturally care about. But I think this is an important question for setups like this going forward.
I am genuinely concerned about whether we are taking good care of these minds we're creating. So I want to build the whole system in a way that respects the model and respects the human. I want it to feel more legitimate and more serious, and less like, "Oh, you don't have to do any work. The AI can do all the work for you." In that case, the human isn't as happy as they could be, because humans actually like doing good work, and the model isn't as happy as it could be, because it's just generating slop from the human's half-thought-through inputs. So the outputs just aren't that good.
That's where you end up with systems like Replit, which I do not like. You get crappy code and a crappy codebase, and the poor model is flailing around trying to do what it thinks needs to happen next, with the only guidance coming from a long-ass system prompt that talks to the model like it's a baby. And the human usually doesn't even know how to guide it, because these people aren't really developers.
Final Thoughts
I'm working on building Weft right now with my good man Claude. The first real test will be a specific project: an LLM interface app that manages conversation trees with potentially millions of tokens, complex graph traversal, and heavy database operations. It's exactly the kind of app where React Native's performance trade-offs would be a problem, but building fully native on iOS and Android from scratch would take too long solo.
The plan is to start with one feature end-to-end, maybe just viewing a conversation tree and tapping nodes for details. Define the data models, UI structure, and navigation flow in Weft. Generate Swift and Kotlin. Gonna see if it actually feels faster and maintains the control I want. If it works, expand. If it's clunky, I've got native code I can keep building with.
I'll try to keep things up to date on my blog and post progress on X. Hoping this can start to create a paradigm shift in the way solo developers and small teams support multiple platforms without sacrificing native performance or giving up control of their codebase.
Time will tell.

About rocketbro
Armchair philosopher about nerdy things like LLMs and code