• Kalcifer@sh.itjust.works
    2 months ago

Huh. That’s actually kind of a clever use case; I hadn’t considered that. I presume the main obstacle would be the token limit of whatever LLM is being used (presuming that it was an LLM at all). Analyzing an entire codebase would, depending on the project, likely require an enormous number of tokens, more than an LLM could handle, or it would just be prohibitively expensive. To be clear, I’m not saying that such an LLM doesn’t exist (one very well could), but if one doesn’t, then that would be a rationale I would currently stand behind.

    • unknowing8343@discuss.tchncs.deOP
      2 months ago

I understand, but I wouldn’t be surprised to see a solution out there that feeds the AI chunks of code without the full context… It may still be able to detect, “hey, you told me this software is supposed to do X, and here it seems to be doing Y.”
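The chunking idea above can be sketched out pretty simply. This is a hypothetical illustration, not a real tool: it approximates tokens as roughly four characters each (the actual ratio varies by tokenizer), and the budget value is made up. Each chunk would then be sent to the model separately along with a description of what the software is supposed to do.

```python
CHARS_PER_TOKEN = 4    # rough heuristic; real tokenizers vary
TOKEN_BUDGET = 8000    # assumed per-request context limit (made-up value)

def chunk_source(text: str, token_budget: int = TOKEN_BUDGET) -> list[str]:
    """Split source text into pieces that fit under the token budget,
    breaking on line boundaries so each chunk stays readable code."""
    max_chars = token_budget * CHARS_PER_TOKEN
    chunks, current, size = [], [], 0
    for line in text.splitlines(keepends=True):
        # Flush the current chunk before this line would overflow it.
        if size + len(line) > max_chars and current:
            chunks.append("".join(current))
            current, size = [], 0
        current.append(line)
        size += len(line)
    if current:
        chunks.append("".join(current))
    return chunks

# Each chunk could then be paired with a prompt along the lines of:
# "This software is supposed to do X; flag anything here doing Y instead."
```

The obvious trade-off is that the model only sees one chunk at a time, so it can miss behavior that spans files, which is presumably why whole-codebase analysis is the expensive part.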

      I guess we’ll have to wait a couple of years for these tools to be accessible and affordable.