Show HN: I built a non-linear UI for ChatGPT

12 days ago (grafychat.com)

Hi HN,

I built this out of frustration with the ever-growing list of AI models and features to try, and to fit my workflow.

The visual approach clicks for me, so I went with it. It provides more freedom and control over the outcome; predictable results and increased productivity are what I'm after when using conversational AI.

The app is packed with features. My most-used are the prompt library, voice input, and text search; narration is useful too.

The app is local-first and works right in the browser. No sign-up needed, and it's absolutely free to try.

BYOAK – bring your own API Keys.

Let me know what you think, any feedback is appreciated!

For what it's worth, one CSS line lags the HELL out of my laptop on the site. It's backdrop-filter: blur(0.1875rem) on modals, like the YouTube video popup.

Some time ago I had an idea for a similar interface without the dragging feature: basically, just a tree visualisation. I usually discuss a tangent topic in the same conversation, but I don't want to confuse the AI afterwards, so I edit the previous message where the tangent started. However, OpenAI would discard that tangent tree. Instead, it would be nice to have a tree of the tangent topics explored, without necessarily having to sort them manually, just visualising the tree.

  • ChatGPT keeps the full tree, doesn't it? You can swap back and forth on any particular node, last I checked.

    • I haven't seen that. So I have actually built what the parent wrote.

      So it seems I did waste time unnecessarily... but where exactly do I find the full tree in ChatGPT convos?


Interesting take! It does seem to address a typical "intermediate" workflow; even though we prefer linear finished products, we often work by building out a hierarchy first. I've been using Gingko [1] for years; I find it eases the struggle of organizing the structure of a problem by both allowing endless expansion of levels and easily collapsing everything into a linear structure.

In your case, do you hold N contexts (N being the number of leaves in the tree)? Are the chats disconnected from each other? How do you propose to transition from an endless/unstructured canvas to some sort of a finished, organized deliverable?

1: https://gingkowriter.com/

  • Great questions!

    > In your case, do you hold N contexts (N being the number of leaves in the tree)?

    It depends; contexts are just a form of grouping.

    > Are the chats disconnected from each other?

    > How do you propose to transition from an endless/unstructured canvas to some sort of a finished, organized deliverable?

    RAG with in-app commands. I'm working on a local RAG solution; it's early but promising. Basically, chat with all your data and apply a wide range of commands to it.

  • > How do you propose to transition from an endless/unstructured canvas to some sort of a finished, organized deliverable?

    Why would they, though? For me as a potential user of this (and someone who thought about building a tool like this for myself), the tree (or better, a directed graph) is the desired end result.

  • Slightly OT, but there was standalone software just like Gingko for the Mac. Do you know anything about it?

    Edit: I think it was an old version of gingko as a desktop app. Still available at https://github.com/gingko/client/releases

  • > Gingko

    A subscription pricing model for software where everything should stay on my machine is a no-go for me.

You can create something like this easily by yourself using Obsidian and a plugin like https://github.com/AndreBaltazar8/obsidian-canvas-conversati...

From watching the demo it looks interesting, but I figure I would get tired of dragging nodes around and looking for ones that I'm interested in. Does it allow searching?

It would be more interesting to me if it could use AI as an agent to create a graph view - or at least propose/highlight followup questions that self-organize into a graph.

  • > I would get tired of dragging nodes around

    Me personally, I find value in taking my time to organize and drag things around, probably because I'm a visual thinker.

The only feedback I would give is that I'm suspicious of (and will not buy) closed-source AI anything. With that said: thank you for sloughing off the subscription model trend! That is welcome.

But going open source so that I know "for sure" no telemetry is being sent and charging for support would be the only way to get money out of me for this. I'm probably the odd one out for this, so take that with a fair helping of salt.

This is a great idea, so much so that it's also something I could probably put together an MVP of in a weekend (or two) of dedicated work (the fancy features that I personally don't care about would probably take longer to implement, of course...).

Good work! Keep it up.

  • > But going open source so that I know "for sure" no telemetry is being sent and charging for support would be the only way to get money out of me for this.

    Is the self hosted option a workable solution for you?

    https://www.grafychat.com/d/docs/selfhost

    Unless it's minified I guess.

    • I would only use this (or any AI) self-hosted if it works 100% offline.

      I would also not want it minified - as I would want the freedom to tinker with it to my personal specifications. Which makes me ask a question: what rights would I have to modify this software, per your license?


  • Thank you!

    I would love it if we had some kind of 'open-build' methodology, so that projects unwilling to open their source could still allow any necessary audits against the build. Just a thought.

I like this and wish OpenAI or Anthropic enabled something similar in their UIs... it would be simple actually: "create a new chat from here".

Otherwise, great job! It's cool, but it's pricey, and that is a personal deterrent.

Good landing page; it explained the product to me well enough. I like your concept too, as I've sometimes wished for something similar in the past.

The demo you shared shows you creating a child chat from the original parent chat. Have you tried something like connecting/merging two child chats to create a subsequent child chat? Or maybe simply creating a child chat from a previous child chat?

Something I built as an add-on, but that would be nice to integrate into some of these front ends, is a find/replace key:value store to help avoid "leaking" anything sensitive.

If you could replace IPs or domains or subdomains with a filler domain like something.contoso.com and send that to ChatGPT instead of my internal domain, that would be a feature I would pay money for.

Like I said, I have an implementation of this written in Python, but it's an add-on to a separate frontend, which makes it extra clunky.
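For anyone curious, the idea above can be sketched in a few lines. This is a minimal, hypothetical version (all names and values are made up, and a real tool would need to handle filler strings colliding with real text):

```python
# Minimal sketch of a reversible key:value masking store.
# Sensitive values are swapped for fillers before a prompt is
# sent to a hosted model, and swapped back in the response.

class MaskingStore:
    def __init__(self, mapping):
        # mapping: {real_value: filler}, e.g. internal domain -> contoso filler
        self.mapping = mapping
        self.reverse = {v: k for k, v in mapping.items()}

    def mask(self, text):
        # Replace sensitive values before the prompt leaves the machine.
        for real, filler in self.mapping.items():
            text = text.replace(real, filler)
        return text

    def unmask(self, text):
        # Restore real values in the model's response.
        for filler, real in self.reverse.items():
            text = text.replace(filler, real)
        return text

store = MaskingStore({
    "intranet.corp.example": "something.contoso.com",  # hypothetical internal domain
    "10.0.4.17": "192.0.2.1",                          # hypothetical internal IP
})

prompt = "Why can't 10.0.4.17 resolve intranet.corp.example?"
masked = store.mask(prompt)
# masked == "Why can't 192.0.2.1 resolve something.contoso.com?"
```

Plain `str.replace` is the naive approach; a production version would want regex word boundaries so substrings don't get mangled.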

I always feel like whiteboarding & concept mapping are better when it comes to generative AI, especially given that we chat in a "multimodal way" these days -- just think of old plain-text SMS compared to the meme-, link-, and rich-text-powered IM tools of today.

Congrats! You may also check out flowith and ai.affine.pro for similar selling points.

Also, Heptabase is good, and they will definitely make an AI version sooner or later.

Congrats on the launch. I've only played around a bit, but I'll take a closer look soon.

Would be great if you could extend the documentation.

If you're not open sourcing the app, what about at least open sourcing the documentation?

One thing I'd like to extend is on https://www.grafychat.com/d/docs/intro

3. Configure Ollama server to make sure it allows connection from grafychat.

That's not very helpful. Something along the lines of: Set the environment variable OLLAMA_ORIGINS to "https://www.grafychat.com" and rerun "ollama serve". Use your custom host if you're using the self-host option.

  ```sh
  OLLAMA_ORIGINS="https://www.grafychat.com" ollama serve
  ```

It's not that much more text, but it makes it way easier for people to go and try out your app with Ollama.

  • Thanks for the feedback!

    Open sourcing documentation is an interesting idea.

    > Something along the lines of: Set the environment variable OLLAMA_ORIGINS

    I'll test it out and update the docs.

This is wild! What have you found it most useful for?

Have you tried a more straightforward approach that follows the ChatGPT model of being able to fork a chat thread? I could use something like this where I can fork a chat thread and see my old thread(s) as a tree, but continue participating in a new thread. Your model seems more powerful, but also more complex.

  • This is my daily GPT driver, so I use it for almost anything, from research to keeping my snippets tidy and well organized. I use voice input a lot to take my time and form my thoughts and requests, and text-to-speech to listen to the answers too.

I have to admit, I don't get it. (And I want to be clear that's a personal statement, not an overall comment on the app. It looks quite well done, and if others get value from it, awesome!)

But for me, I'm stuck with questions. What's the point of drawing connectors when there seems to be no implied data flow? Is this just for you, as a reminder of the hierarchy of your queries? Or do you actually set the upstream chat as context, and reflow the whole thing if you change upstream queries? (That one would definitely be fun to play with; still not sure about the long-term value, but definitely interesting.)

Good luck, and looking forward to see where you're taking this!

  • Seems like organized ChatGPT in the form of a mind map. It's quite intuitive to me because I've had some chats where I kept scrolling back to the first GPT response. You can map out a question and answer, then create nodes for follow-ups about specific details. Each branch of the tree structure can organize a rabbit hole of follow-ups on a specific topic.

  • Thank you!

    Like I mentioned earlier, for me the app is canvas-based first, node-based second. So connections are a visual tool to craft or manage a prompt before feeding it to the LLM, and the canvas is a visual tool to organize and keep track of large numbers of chats.

    I try to use LLMs not for the sake of chatting, but to get results, and these tools seem to help me with that.

    Hope that makes sense.

Looks interesting. I had an idea last year that I never acted on that was down this same path.

The design looks really nice.

At this point, my chats are so brief and infrequent compared to a year ago. The standard UI is more than I need at this point as I never reference back to any past chat.

I also find I seem to get the best answer from the least context. Extra context seems to hurt more than it helps for my uses.

Down the line, something like this might be the obvious interface everyone will want to use. Right now, though, it's not even something I need to try.

Congrats though. I love BYOAK.

Very cool! I built a version of this [1], but balked at trying to sell it. This is the third iteration of this idea I've seen so far. Your reply popup is a smart feature and a nice touch! Love it. I love the privacy focus and BYOK, as well.

Congrats on the launch!

Really cool to see graph interfaces for AI having their moment. :)

[1] https://coloring.thinkout.app/

  • Wow, this is really cool! Thanks for sharing!

    • Thanks! Are you able to figure it out?

      Feel free to message me if you're willing to chat about it. Would love to know if it's actually useful for you.

      joshua@huttj.com

Very nice! Thanks for sharing; I will definitely give it a try. I think we settled on the chat interface to play with LLMs, but there's nothing really holding us back from trying new ways.

  • Yeah, I'm annoyed that OpenAI has deprecated its text completion models and API. I think there's a ton of value to be had from constrained generation like what's available with the Guidance library.

Your full-stack dev graph seems to have 75 queries in it.

Please consider providing a demo video showing how this works with code work.

I get the overall behavior, but sometimes code segments can be quite long, or multiple specific sections need to be combined to create additional context.

It would be helpful to see the current baseline product behavior for interaction on a "common" coding task, solving problems in typescript and / or python.

Thank you so much for building this, it's exactly what I was looking for!

Love the license model instead of a subscription. Also loved that I can start trying right away without any hassle.

Couple suggestions:

I can't decide between Extended and Premium options. What does "premium support" mean?

Also, it only shows an upgrade option on the checkout page; perhaps it'd be worth including it in the FAQ and the Pricing section as well.

  • Thank you!

    > What does "premium support" mean?

    Premium option includes prioritized support and access to new features that might be unavailable for other types of licenses.

    I will update the website for more clarity.

A tree visualization like this one would be great as a complement to tabs in web browsing, especially on a monster display.

Amazing work, kudos! Love the canvas, drag'n'drop, and line connectors. Did you use a library or make it yourself?

I didn't find it in the documentation: how would I go about self-hosting it for a small team of, say, 14 people?

Should I buy licenses for 14 people (3x Extended), or 1 for all, where everyone can see everyone's conversations, or are there accounts? I have a central Ollama instance running and also OpenAI API keys.

Thank you.

  • > How would I go about if I want to self-host it for a small team of like 14 people

    > Should I buy licenses for 14 (3x extended) instances

    Yes, that should work. Each license comes with 5 seats/activations. Each seat has its own copy of the data.

This looks really cool. I did not expect to see something I might actually buy but this is something that could be very nice for me :-)

Will the Self-host package include source (i.e. source available) or is it just the transpiler output?

Also, is there (or are there plans for) support for Postgres or another database for persistence?

  • Thank you!

    > Will the Self-host package include source (i.e. source available) or is it just the transpiler output?

    No sources, just a folder with compiled assets that you can run on a static server. This is already available.

    > Also, is there (or are there plans for) support for Postgres or another database for persistence?

    Yes, there are plans for local Postgres.

nice, something I didn't know I needed :D

might want to increase the font weight in the pricing section; it's hard to read

also in "How much does it cost?" I think you should also add the Free option (for those like me who missed the Try For Free button at the top)

Do you plan to open source it? I would love to extend it. I had similar ideas about a non-linear UI.

Wow. I was so frustrated with chat that I was almost going to write something like this myself. Now I don't have to :)

Curious about the business model here, though. How many sales have you had so far, if you don't mind me asking?

Curious why you settled on the BYOAK approach rather than a subscription approach.

  • Subscription fatigue is real :)

    • I have to say, I didn't realize there was no subscription until I saw this comment. That makes it much more interesting from the start.

      Yes, I hate subscriptions. Love your approach.

      I also love that you focus on your strength, which is the intuitive and flexible interface, rather than LLMs or prompts or whatever. This way it's also very extensible, as every good tool should be.

    • I was thinking it was because it would be easier than keeping track of usage, which I assume you would need to do with a subscription-based model, i.e. all users using your key.


Congrats on the launch - I love this. Organizing text is often the hard part when working with LLMs.

Only thing I don't love is heavy mouse use. Are there keyboard shortcuts for all the operations shown?

  • Thanks!

    > Are there keyboard shortcuts for all the operations shown?

    For now, yes. What would you like to be added?

Make sure to have very tight limits on any API key you provide to someone else. They could burn through tens of thousands of dollars each day if you do not have security in place.

It looks like you put a lot of work into this, but node-based workflows are OK when they're a necessary evil, and just an evil otherwise.

I'd be more interested in a tool where I can "add data" to it by drag and drop or folder import, then I can just type whatever prompt and the app's RAG system pulls relevant data/previous prompts/etc out of its store ranked by relevance, and I can just click on all the things that I want inserted into my context with a warning if I'm getting near the context limit. I waste a lot of time finding the relevant code/snippets to paste in manually.
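The "pull relevant data ranked by relevance" step could be sketched as follows. This is only a toy illustration with made-up snippets, using plain word overlap where a real RAG system would use embeddings and a vector index:

```python
# Toy sketch of ranking stored snippets by relevance to a prompt.
# Real RAG pipelines embed text and use vector similarity; simple
# word overlap is used here just to show the shape of the idea.

def tokenize(text):
    """Lowercase and split into a set of whitespace-separated tokens."""
    return set(text.lower().split())

def rank_snippets(prompt, snippets):
    """Return snippets sorted by shared-word count with the prompt."""
    query = tokenize(prompt)
    scored = [(len(query & tokenize(s)), s) for s in snippets]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [s for score, s in scored if score > 0]

# Hypothetical snippet store dropped in via drag-and-drop / folder import.
snippets = [
    "def parse_config(path): ...",
    "notes on retry logic for checkout",
    "payment service config lives in config/payments.yaml",
]

top = rank_snippets("where is the payment service config", snippets)
# top[0] is the payments.yaml snippet; the user could then click
# which hits to insert into the context window.
```

The click-to-insert UI described above would just concatenate the selected snippets into the prompt, warning when the token count approaches the model's context limit.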

  • This sounds a lot like my dream setup. We've been slowly building something along those lines. I've linked a video below that shows how we did something similar with an Obsidian plugin. Hit me up if you're interested in more details; we'd be happy to get an alpha user who gets it.

    We've mostly had trouble explaining to people what exactly it is that we're building, which is fine, since we're mostly building it for ourselves, but it still seems like something like this could be the killer app for LLMs.

    Obsidian Canvas UI demo -> https://www.youtube.com/watch?v=1tDIXoXRziA

    Also linking our Obsidian plugin repo in case someone wants to dive deeper into what we're about -> https://github.com/cloud-atlas-ai/obsidian-client

  • > I'd be more interested in a tool where I can "add data" to it by drag and drop or folder import, then I can just type whatever prompt and the app's RAG system pulls relevant data/previous

    This is very similar to something I'm planning to add next, so stick around.

  • Well, here's a somewhat limited version of your idea that really only helps mitigate the copy/paste effort with coding: https://github.com/backnotprop/prompt-tower

    My original idea was a DnD interface that works at the OS level as a HUD and functions like your idea, but that is not so simple to develop.

  • For me this interface is canvas-based first, node-based second, meaning sometimes I might not even use connections to get my desired result from the LLM, but I have the place and form for the result and I know how to find it. Connections here are not set in stone like in mind-mapping software, for example; they're a tool.

Can you go get acquired by Phind please? Brainstorming with the robots is a non-linear activity and I believe you are on the right track.

It seems to work well but a desktop app (or self hosted) is essential. I can't paste in valuable API keys to a third party website.

  • The desktop app is coming soon, and the self-host option is already available as part of the Extended License.

    I have no plans to open source it at the moment, but it would be great to come up with something like 'open build' for cases like that.

    • The purchase screen made me think self-hosting was coming soon for Extended. How far off is the desktop app, and will it be self-hosted or an interface to the website?


Congrats on the launch! I love that you let ppl try it without even signing up! The mobile experience needs to work tho.

Powerful stuff, this is the kind of workspace I've been waiting for for AI. Excited to see how it evolves!

I wanted the same for myself but balked at the amount of work I'd need to do to implement it :)

Great job!

This is interesting and all, but it's a tad complex to use. AI is supposed to simplify your life, but this just ends up making things more complicated.

Ask -> answer, no more steps, that is the core value of ChatGPT or AI.

  • Suppose I have a conversation with ChatGPT about a macro, or better yet, a series of macros. We reach the 10th sub-module, but suddenly I find a bug in module 2 (from the chat 20 minutes ago). While I could redirect the chat back to module 2, it's a bit convoluted. Ideally, I'd want to return to an earlier point in the conversation, resolve module 2, and then continue where we left off. However, if I update my response from 20 chats ago, I risk orphaning the rest of the conversation. The response lag also complicates things, because I might move on to new ideas or debugging tasks in the meantime. I suppose I should say that because of the lag time I'm not in sync with the chat; that lag affords me the opportunity to keep doing other things. If the chat were more like Groq, maybe that would be less the case. Not sure.

    The other thing I find is that if I change how I replied/asked, I get a different answer. I like the idea that I can fork a node and evaluate outcomes based on my varied inputs. You're right, it's hugely more complex. But it's complexity I think I'd love to have available.

  • > Ask -> answer, no more steps, that is the core value of ChatGPT or AI.

    This is the absolute ideal state of the product, I agree.

I wish Perplexity had a similar UI option, so I could lay out my research along multiple paths.

I built a similar demo to this, but for images. IMO this is a much better structure for working with LLMs, as it allows you to really riff with a machine instead of feeling like you need a deterministic "next step".

https://youtu.be/k_mJgFmdWWY

  • Sweet demo, you should do a Show HN! This is much more interesting to me, as the visual element makes much more sense here rather than just putting entire paragraphs in nodes.

    • The text nodes are also interesting; it's like a mind map. I can see how it could be great for learning, planning, collaboration, exploring...

  • Looks good. I tried it out and it is indeed alpha in many regards (e.g. sometimes it doesn't save a picture on Windows, sometimes it doesn't show the prompt, ...), but the idea has potential. I would encourage you to keep working on it (and maybe keep in mind that if this suddenly goes viral and you have no API limits in place, you might get poor quickly).

    • Yeah, the idea was mostly to put a stake in the ground as an early UX experiment (I released it last year), but it's been in the back of my mind as something to keep experimenting with, and honestly to rebuild for the web in the custom game engine I'm working on.

  • Looks amazing! The Unity client is quite sleek. I'd wager the creative play can be taken to the next level with a low-latency model like https://fal.ai/models/fast-turbo-diffusion-turbo

    • What I really want to do is make it model agnostic. SDXL was an easy choice at the time, but you could really easily just make it be a local model or any hosted visual model with an endpoint. The core idea is just tying an LLM to an image model and tying those to a force-directed graph, so really anything could be an input (or an output - you could also do it with text)

  • Great stuff! That deterministic "next step" is the last line of defense for us humans :)

[flagged]

  • This is the worst kind of feedback comment.

    It's a damn ChatGPT front-end, do you expect it to be written in PHP?

    Also, this site plays perfectly fine with uBlock Origin. If you're going to throw criticism out, at least verify that what you're saying is correct.

    Honestly, reading your comment history, you really should be aware of the fact that most of what you're putting on this site is at least not positive, if not actually negative.