I’m interested in automatically generating lengthy, coherent stories of 100,000+ words from a single prompt using an open source local large language model (LLM). I came across the “Awesome-Story-Generation” repository which lists relevant papers describing promising methods like “Re3: Generating Longer Stories With Recursive Reprompting and Revision”, announced in this Twitter thread from October 2022 and “DOC: Improving Long Story Coherence With Detailed Outline Control”, announced in this Twitter thread from December 2022. However, these papers used GPT-3, and I was hoping to find similar techniques implemented with open source tools that I could run locally. If anyone has experience or knows of resources that could help me achieve long, coherent story generation with an open source LLM, I would greatly appreciate any advice or guidance.

  • some_guy@lemmy.sdf.org · 5 days ago

    This is why buying books on Amazon now requires checking the author’s background to avoid buying AI slop. I never thought I’d see the day, but it became clear to me last summer.

  • Rayquetzalcoatl@lemmy.world · 5 days ago

    I am confused as to why you’re going through all this struggle! You’ll get the same results just copy-pasting big chunks of other books that humans have already put time and effort into writing :) best of luck!

  • november@lemmy.vg · 5 days ago

    Why do you want to do this? What is your end goal?

    If it’s to read a story, there are already more stories in the world than you could hope to read in your entire lifetime. Written by humans, with actual intention behind them, guaranteed to be coherent.

    If it’s to create a story, well, you’re not creating anything by having an LLM do it for you.

  • kent_eh@lemmy.ca · edited · 5 days ago

    The 100k-word part is relatively easy.

    The coherent story part is not possible with today’s LLMs, even with a much smaller word count.

    Hell, lots of human writers fail at making their stories coherent.

  • simple@lemm.ee · 6 days ago

    You need to use an LLM with a very long context length, potentially 1 million+ tokens. I don’t know if any local LLMs can even go that far, and if they can, you’ll need an outrageous amount of RAM and VRAM.
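    For a rough sense of the memory involved: the KV cache alone grows linearly with context length. A back-of-envelope sketch (the model dimensions below are illustrative — roughly an 8B-class model with grouped-query attention — not any specific release):

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, n_tokens, bytes_per_val=2):
    # K and V each store n_layers * n_kv_heads * head_dim values per token,
    # hence the factor of 2. bytes_per_val=2 assumes fp16.
    return 2 * n_layers * n_kv_heads * head_dim * n_tokens * bytes_per_val

# Hypothetical model: 32 layers, 8 KV heads of dim 128, fp16.
gib = kv_cache_bytes(32, 8, 128, 1_000_000) / 2**30
print(f"{gib:.0f} GiB of KV cache for a 1M-token context")  # prints: 122 GiB ...
```

    So on these (assumed) dimensions you’re looking at roughly 122 GiB of cache for a million tokens, before counting the model weights themselves; quantized caches shrink that, but it stays far beyond typical consumer VRAM.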

    But honest question… Why? If you’re planning on generating fake books or stories, it’s not going to happen; you’ll create the most generic, barely coherent text.

    And fair warning, if you’re trying to sell AI generated stories you’ll quickly be permabanned from any store, so don’t even try it.

  • Deestan@lemmy.world · 6 days ago

    This concerns me:

    stories of 100,000+ words from a single prompt

    An LLM excels at making passable derivative work. It does not, by definition, come up with original ideas.

    What are you going to do with 100,000+ words of 100% derivative writing where anything potentially original can be summed up in a prompt of a few dozen words?

    Will this be published or sold somewhere? Undercutting or crowding out original works?

    • BlameThePeacock@lemmy.ca · 4 days ago

      You think Humans aren’t pumping out 100% derivative works all the time?

      Like every shitty romance novel published. There are only so many ways a man can woo a woman: they just change the location, randomize the set of actions from a list of things men do to turn women on, throw in something to harm the relationship, and then come up with a set of names.

      • Deestan@lemmy.world · 4 days ago

        You think Humans aren’t pumping out 100% derivative works all the time?

        Don’t worry. I don’t think that.

        A big hope I have for AI is that 100% derivative work by humans is now easier to call out. If a rock with a 9V battery could produce it, why should we value it?

    • hisao@ani.social · 5 days ago

      This is a cool way to put it, but I think even just errors and randomness in the reproduction of source ideas can sometimes count as original ideas. Nevertheless, I also think it doesn’t fully encompass the range of mechanisms by which humans come up with original ideas.

      • Deestan@lemmy.world · 5 days ago

        Randomness can give novel combinations, sure, but we shouldn’t call that an original idea.

        As for the various ways humans come up with original ideas, they are based on a level of reflection, reasoning and thought processing. We know that’s not possible for an LLM: while they are complex in their details, the way they work is very well defined. They imitate.

        • hisao@ani.social · 5 days ago

          I agree with this in terms of process, but I don’t necessarily agree in terms of result. If you enumerate the state space of the target domain, you might realize that all the constructions there can be reached by randomly introducing errors or modifications into a finite set of predefined constructions. Most AI models don’t really work like this from what I know (they don’t try to randomize inference or introduce errors on purpose); otherwise they could probably evade model collapse. But I don’t see why they couldn’t work like this. Humans often do work like this, though. A lot of new genres and styles appear when people do something inspired by something else but fail to reproduce it accurately, and when evaluating it they realize they like how it turned out, so they continue doing that thing and it evolves further by slight mutations. I’m not saying I want AI to do this, or that I like AI or anything; I’m just saying I think this is a real possibility.

  • xmunk@sh.itjust.works · 6 days ago

    LLM generations of that length tend to go off the rails - I think generating it in chunks, where you can try to guide the model back onto the rails, is probably a saner technique.

    There are several open-source LLMs to lean on - but for long generations you’ll need a lot of memory if you’re running it locally.

  • ChasingEnigma@lemmy.world (OP) · 6 days ago

    Creating a 100,000-word coherent story using an LLM with a limited context window requires strategic planning in how you manage the narrative flow, continuity, and character development over multiple sessions. Here’s a strategy tailored for this scenario:

    1. Detailed Plot Outline:

      Expand the Outline: Break the story down into smaller, manageable arcs or segments (e.g., each act could be split into several chapters). Each segment should have its own mini-outline:
        - Major plot points
        - Character development for that segment
        - Setting changes
        - Key interactions or conflicts

      Micro-Outline for Each Chapter: For each chapter within these arcs:
        - Opening scenario
        - Middle conflict
        - Resolution or cliffhanger
        - Character arcs within the chapter
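      One way to keep such an outline machine-usable is a nested structure. A sketch (all names and fields here are made up for illustration, not from any particular tool):

```python
# Hypothetical nested outline: arcs -> chapters -> micro-outline fields.
outline = {
    "Act I": {
        "Chapter 1": {
            "opening": "Protagonist discovers the letter",
            "conflict": "Her family refuses to discuss it",
            "resolution": "She leaves home at night",  # or a cliffhanger
            "character_arcs": ["protagonist: obedient -> defiant"],
        },
    },
}

def chapter_brief(outline, act, chapter):
    """Flatten one chapter's micro-outline into prompt-ready text."""
    c = outline[act][chapter]
    return (f"{act}, {chapter}: open with {c['opening']}; "
            f"middle conflict: {c['conflict']}; end on: {c['resolution']}.")
```

      Keeping the outline as data rather than prose makes it easy to splice exactly one chapter’s brief into each prompt.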

    2. Session Management:

      Context Management: Due to the limited context window, you’ll need to manage how much information is retained from session to session.

        - Summarize Previous Content: Before each new prompt, provide a concise summary of the previous narrative sections. This summary should include:
          - Key events
          - Current state of characters
          - Unresolved conflicts or mysteries
          - Setting and time

      Prompt Structure:

        - Start with a Summary: Begin each prompt with a summary:

            Previous chapter summary: [insert summary here]. Now, write the next chapter where [describe the key elements from the micro-outline].

        - Specify Tone and Style: If the story has a specific tone or narrative style, remind the LLM of this:

            Maintain the [tone/style] from previous chapters.

      Length of Each Segment: Estimate how many words you can comfortably fit into one session. If your LLM can handle around 2,000 tokens (which could be around 1,500 words, depending on the model), you might aim for each session to produce a chapter of 1,500 words.
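      The prompt structure above is mechanical enough to script. A minimal sketch, assuming a plain-text prompt for whatever local model you run (the default tone string is just an example):

```python
def build_prompt(summary, brief, tone="grim, first-person"):
    """Assemble one generation request from a rolling summary and a
    chapter brief, following the structure described above."""
    return (
        f"Previous chapter summary: {summary}\n"
        f"Maintain the {tone} tone from previous chapters.\n"
        f"Now, write the next chapter where {brief}\n"
        "Target length: about 1,500 words."
    )
```

      Each session, you’d feed the result to your local model, summarize its output, and pass that summary into the next call.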

    3. Continuity and Cohesion:

      Character Consistency: Keep a running document of character details, relationships, and developments outside the LLM context. Use this to ensure consistency:
        - Character sheets
        - Timeline of events

      Plot Devices: Use recurring elements or plot devices to maintain cohesion:
        - Recurring themes
        - Foreshadowing elements from earlier segments

      Feedback Loop: After each session, review the output for:
        - Continuity errors
        - Character voice consistency
        - Plot holes

      Use this feedback to adjust your next prompts or summaries to address any discrepancies.
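      Part of that feedback loop can be automated. A naive sketch of one check — flagging capitalized words in a generated chapter that aren’t on the character sheet, a cheap heuristic for spotting invented characters (with obvious false positives for place names and the like):

```python
import re

def continuity_flags(chapter_text, known_names):
    """Return capitalized words in the chapter that are neither on the
    running character sheet nor common sentence-starting words."""
    words = re.findall(r"\b[A-Z][a-z]+\b", chapter_text)
    common = {"The", "A", "An", "She", "He", "They", "It", "But", "And"}
    return sorted(set(words) - set(known_names) - common)
```

      Anything this flags is worth a manual look before the next session’s summary is written.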

    4. Incremental Development:

      Iterative Refinement: As you generate content, refine your prompts based on what works…

        • hendrik@palaver.p3x.de · edited · 5 days ago

          I think it’s impossible from just a single prompt, then. My experience aligns with these recommendations. First tell it to come up with interesting story ideas, then pick one. Have it write an outline. Have it come up with story arcs, subplots, and a general structure, plus chapter names… Then tell it to write the chapters individually, factoring in the results from before. Once it trails off or writes short chapters, edit the text and guide it back to where you want it to be.

          It’ll just write bad, and maybe short, stories unless you do that. I mean, you could theoretically automate this: write a program with some AI agent framework that instructs the model to do the individual tasks, have it reflect on its output, and always feed back what it came up with into the next task.

          I’ve tried doing something like that and I don’t think there is a way around this. Or you do it like the other people and just tell it “Generate a novel” and be fine with whatever result it will come up with. But that just won’t be a good result.
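          The pipeline described here could be sketched as a plain loop; `generate` below is a stub standing in for whatever local model you call (llama.cpp, Ollama, etc.), so only the control flow is real:

```python
# Stub for a local model call -- replace with llama.cpp, Ollama, or similar.
def generate(prompt):
    return f"[model output for: {prompt[:40]}...]"

def write_novel(premise, n_chapters=3):
    # Stage 1: outline and structure, produced once up front.
    outline = generate(f"Write a chapter-by-chapter outline for: {premise}")
    summary = ""
    chapters = []
    for i in range(1, n_chapters + 1):
        # Stage 2: write each chapter against the outline and a recap.
        prompt = (f"Outline: {outline}\nStory so far: {summary}\n"
                  f"Write chapter {i}.")
        chapter = generate(prompt)
        chapters.append(chapter)
        # Stage 3: feed a compressed recap forward, not the full text.
        summary = generate(f"Summarize in 200 words: {summary} {chapter}")
    return chapters

print(len(write_novel("a lighthouse keeper finds a door in the sea")))  # -> 3
```

          The editing-and-guiding step is the part that resists automation: in practice you’d inspect each chapter and rewrite the recap before the next iteration.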

        • november@lemmy.vg · 5 days ago

          Looking through your past comments on Lemmy, the only other thing I can see is this:

          I wish there was some feature in the works to let me see less memes and US politics without having to block or subscribe to a bunch of communities.

          You’re just not interested in doing anything at all for yourself, huh? You just want to sit there and mindlessly consume whatever shows up in front of you?

        • Rayquetzalcoatl@lemmy.world · edited · 5 days ago

          “I don’t want to do or revise anything by hand” AI dorks are wonderful. You gonna get an LLM to read the thing for you, too? 😂